| Result   | FAILURE |
| Tests    | 17 failed / 768 succeeded |
| Started  | |
| Elapsed  | 41m17s |
| Revision | master |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\sfsgroupchangepolicy\s\(OnRootMismatch\)\[LinuxOnly\]\,\spod\screated\swith\san\sinitial\sfsgroup\,\svolume\scontents\sownership\schanged\svia\schgrp\sin\sfirst\spod\,\snew\spod\swith\sdifferent\sfsgroup\sapplied\sto\sthe\svolume\scontents$'
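The `--ginkgo.focus` pattern above is shell-escaped and hard to read. As a readability aid, a small sketch (the `pattern` variable is just the escaped string copied from the command line above) decodes it back into the plain test name; the trailing `$` end-of-name anchor is left in place:

```shell
# Decode the '\s'-escaped ginkgo focus regex into the human-readable test name.
pattern='Kubernetes\se2e\ssuite\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\sfsgroupchangepolicy\s\(OnRootMismatch\)\[LinuxOnly\]\,\spod\screated\swith\san\sinitial\sfsgroup\,\svolume\scontents\sownership\schanged\svia\schgrp\sin\sfirst\spod\,\snew\spod\swith\sdifferent\sfsgroup\sapplied\sto\sthe\svolume\scontents$'
# First turn every '\s' whitespace token into a space, then drop the
# remaining escape backslashes before '[', ']', ':', '.', '(', ')', ','.
printf '%s\n' "$pattern" | sed -e 's/\\s/ /g' -e 's/\\//g'
```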
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214
Jan 3 12:23:08.817: Unexpected error:
    <*errors.errorString | 0xc004b909e0>: {
        s: "pod \"pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67\" is not Running: timed out waiting for the condition",
    }
    pod "pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67" is not Running: timed out waiting for the condition
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:262
from junit_17.xml
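The pod-not-Running timeout above lines up with the repeated FailedCreatePodSandBox events recorded later in this log, where the cilium CNI reports "No more IPs available" (node pod-CIDR IPAM exhaustion). One quick way to gauge how often that error fired is to grep a saved copy of the log; this is only a triage sketch, the file path is illustrative, and the heredoc is a trimmed stand-in for the real junit output:

```shell
# Hypothetical triage helper: count cilium IPAM-exhaustion sandbox failures
# in a saved copy of this log. The path and the sample contents below are
# stand-ins for illustration, not the real artifact.
log=/tmp/junit_17.log
cat > "$log" <<'EOF'
FailedCreatePodSandBox: plugin type="cilium-cni" failed (add): postIpamFailure No more IPs available
FailedCreatePodSandBox: plugin type="cilium-cni" failed (add): postIpamFailure No more IPs available
SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-7b0a5289"
EOF
# Each matching line is one sandbox-creation retry that died on IPAM exhaustion.
grep -c 'No more IPs available' "$log"
```

A high count here, with the attach itself succeeding, points at the CNI rather than the EBS CSI driver.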
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 3 12:18:06.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename fsgroupchangepolicy
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:214
Jan 3 12:18:07.577: INFO: Creating resource for dynamic PV
Jan 3 12:18:07.577: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi}
STEP: creating a StorageClass fsgroupchangepolicy-8728-e2e-sc5hp94
STEP: creating a claim
Jan 3 12:18:07.754: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating Pod in namespace fsgroupchangepolicy-8728 with fsgroup 1000
Jan 3 12:23:08.817: FAIL: Unexpected error:
    <*errors.errorString | 0xc004b909e0>: {
        s: "pod \"pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67\" is not Running: timed out waiting for the condition",
    }
    pod "pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67" is not Running: timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.createPodAndVerifyContentGid(0xc001acd1e0, 0xc00105f070, 0x1, {0x0, 0x0}, {0x0, 0x0})
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:262 +0x15c
k8s.io/kubernetes/test/e2e/storage/testsuites.(*fsGroupChangePolicyTestSuite).DefineTests.func3()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:233 +0x296
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0003c1380, 0x735e880)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
STEP: Deleting pvc
Jan 3 12:23:09.167: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.com9cxjh"
Jan 3 12:23:09.344: INFO: Waiting up to 3m0s for PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 to get deleted
Jan 3 12:23:09.519: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (175.065593ms)
Jan 3 12:23:14.695: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (5.350716895s)
Jan 3 12:23:19.870: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (10.526273087s)
Jan 3 12:23:25.046: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (15.702496354s)
Jan 3 12:23:30.222: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (20.878105956s)
Jan 3 12:23:35.398: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (26.053738567s)
Jan 3 12:23:40.574: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (31.229609352s)
Jan 3 12:23:45.750: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (36.406301592s)
Jan 3 12:23:50.926: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (41.581715688s)
Jan 3 12:23:56.102: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (46.758030551s)
Jan 3 12:24:01.277: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (51.933352524s)
Jan 3 12:24:06.453: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (57.108956621s)
Jan 3 12:24:11.631: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m2.286703634s)
Jan 3 12:24:16.807: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m7.462810383s)
Jan 3 12:24:21.985: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m12.640711413s)
Jan 3 12:24:27.162: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m17.818531907s)
Jan 3 12:24:32.340: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m22.99615192s)
Jan 3 12:24:37.518: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m28.173974986s)
Jan 3 12:24:42.697: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m33.353459223s)
Jan 3 12:24:47.876: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m38.532409885s)
Jan 3 12:24:53.052: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m43.708164682s)
Jan 3 12:24:58.231: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m48.887224494s)
Jan 3 12:25:03.410: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m54.065685821s)
Jan 3 12:25:08.585: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (1m59.241094423s)
Jan 3 12:25:13.768: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (2m4.423599436s)
Jan 3 12:25:18.943: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (2m9.599439773s)
Jan 3 12:25:24.119: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (2m14.77551609s)
Jan 3 12:25:29.295: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (2m19.950828377s)
Jan 3 12:25:34.472: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (2m25.128208955s)
Jan 3 12:25:39.648: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (2m30.303859706s)
Jan 3 12:25:44.823: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (2m35.479126759s)
Jan 3 12:25:50.002: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (2m40.657642243s)
Jan 3 12:25:55.177: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (2m45.833171754s)
Jan 3 12:26:00.357: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (2m51.013089227s)
Jan 3 12:26:05.533: INFO: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 found and phase=Bound (2m56.189333459s)
STEP: Deleting sc
Jan 3 12:26:10.725: FAIL: while cleanup resource
Unexpected error:
    <errors.aggregate | len:1, cap:1>: [
        {
            msg: "persistent Volume pvc-7b0a5289-add7-4d8d-8491-609374832515 not deleted by dynamic provisioner: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 still exists within 3m0s",
            err: {
                s: "PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 still exists within 3m0s",
            },
        },
    ]
    persistent Volume pvc-7b0a5289-add7-4d8d-8491-609374832515 not deleted by dynamic provisioner: PersistentVolume pvc-7b0a5289-add7-4d8d-8491-609374832515 still exists within 3m0s
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/storage/testsuites.(*fsGroupChangePolicyTestSuite).DefineTests.func2()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:132 +0x20f
panic({0x6c3d000, 0xc0050909c0})
    /usr/local/go/src/runtime/panic.go:1038 +0x215
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x73
panic({0x62d5960, 0x78abec0})
    /usr/local/go/src/runtime/panic.go:1038 +0x215
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0008c03c0, 0x13b}, {0xc003d62d30, 0x70cbf6a, 0xc003d62d50})
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
k8s.io/kubernetes/test/e2e/framework.Fail({0xc0008c0280, 0x126}, {0xc0046854a0, 0xc0008c0280, 0xc00411a270})
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:63 +0x149
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc003d62e98, {0x79beb98, 0xaa12408}, 0x0, {0x0, 0x0, 0x0})
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:79 +0x1bd
k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc003d62e98, {0x79beb98, 0xaa12408}, {0x0, 0x0, 0x0})
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0x92
k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x7b06bd0, {0x78b2200, 0xc004b909e0}, {0x0, 0x2, 0xc001adf920})
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xa9
k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40
k8s.io/kubernetes/test/e2e/storage/testsuites.createPodAndVerifyContentGid(0xc001acd1e0, 0xc00105f070, 0x1, {0x0, 0x0}, {0x0, 0x0})
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:262 +0x15c
k8s.io/kubernetes/test/e2e/storage/testsuites.(*fsGroupChangePolicyTestSuite).DefineTests.func3()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/fsgroupchangepolicy.go:233 +0x296
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0003c1380, 0x735e880)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "fsgroupchangepolicy-8728".
STEP: Found 16 events.
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:18:07 +0000 UTC - event for ebs.csi.aws.com9cxjh: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:18:08 +0000 UTC - event for ebs.csi.aws.com9cxjh: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:18:08 +0000 UTC - event for ebs.csi.aws.com9cxjh: {ebs.csi.aws.com_ebs-csi-controller-74ccd5888c-qh2jn_810d5c96-d818-4fac-a02d-8d2e30f39a40 } Provisioning: External provisioner is provisioning volume for claim "fsgroupchangepolicy-8728/ebs.csi.aws.com9cxjh"
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:18:11 +0000 UTC - event for ebs.csi.aws.com9cxjh: {ebs.csi.aws.com_ebs-csi-controller-74ccd5888c-qh2jn_810d5c96-d818-4fac-a02d-8d2e30f39a40 } ProvisioningSucceeded: Successfully provisioned volume pvc-7b0a5289-add7-4d8d-8491-609374832515
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:18:12 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {default-scheduler } Scheduled: Successfully assigned fsgroupchangepolicy-8728/pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67 to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:18:14 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-7b0a5289-add7-4d8d-8491-609374832515"
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:18:19 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "4fa25e41e414872100063f2641e690fff1266968406b61bffd1acc06551e03c5": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:18:33 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b49d30c9c50934993e77393eb781311e94fcd7b284f2e60fb56f858eca6c0757": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:18:44 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d8cca524c644219bc48539a1b75383192e466a0e9db87852ac6dd0e0b0415932": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:18:57 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1fbeccdc64aac925cc323c4ecc79491f29c1bd9477d74b4751a5c6f6b0ca77dd": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:19:12 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3e27c8e82bab1a30ac892b938e2fb95ca52fb8eba568146da31b5852084f3941": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:19:25 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ec546e3425b8bdb1de26ec10c1e4167a2299535f345f0f4f592a2492ff6d85df": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:19:36 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "61747d6a98d91eac427f01c6dcd873850cf7427ca9c5e850f77889476c69edee": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:19:49 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "300bdbb36f14517cfcb564aaae05e22f4997f0e4c518a3dac9035418ea39db2d": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:20:01 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "637ddec1606f22767d08265a374dd2954d7b83b7bc001b7fe8c70956a105bbdc": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:26:10.903: INFO: At 2023-01-03 12:20:15 +0000 UTC - event for pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "106f3ee43375252b157157f7dacbd9e99d0ba3439bf11e6d683e1a10d1833c58": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:26:11.078: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 12:26:11.078: INFO: pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67 ip-172-20-33-54.ap-northeast-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:18:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:24:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:24:37 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:18:12 +0000 UTC }]
Jan 3 12:26:11.078: INFO:
Jan 3 12:26:11.442: INFO: Logging node info for node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:26:11.617: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-33-54.ap-northeast-2.compute.internal feeb853a-f938-421c-a48c-593d753497df 23029 0 2023-01-03 12:09:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-33-54.ap-northeast-2.compute.internal kubernetes.io/os:linux
kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-33-54.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02f9cef67ede2f5b0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-03 12:17:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-03 12:17:46 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-02f9cef67ede2f5b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:25:43 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:25:43 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 
12:25:43 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:25:43 +0000 UTC,LastTransitionTime:2023-01-03 12:09:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.33.54,},NodeAddress{Type:ExternalIP,Address:43.201.108.232,},NodeAddress{Type:Hostname,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-108-232.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b83476f0693e43ae0a06bf0db9bb4,SystemUUID:ec2b8347-6f06-93e4-3ae0-a06bf0db9bb4,BootID:2182b644-e5a3-4e7d-a07b-8b550578833d,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-000ea8055502b2cd5 kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-000ea8055502b2cd5,DevicePath:,},},Config:nil,},}
Jan 3 12:26:11.618: INFO: Logging kubelet events for node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:26:11.795: INFO: Logging pods the kubelet thinks is on node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:26:11.977: INFO: pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67 started at 2023-01-03 12:18:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container write-pod ready: true, restart count 0
Jan 3 12:26:11.977: INFO: test-ss-1 started at 2023-01-03 12:26:08 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container webserver ready: true, restart count 0
Jan 3 12:26:11.977: INFO: ebs-csi-node-lpbdv started at 2023-01-03 12:09:25 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:26:11.977: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:26:11.977: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:26:11.977: INFO: execpodc9t47 started at 2023-01-03 12:25:34 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:26:11.977: INFO: inline-volume-tester-9zpnm started at 2023-01-03 12:25:41 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:26:11.977: INFO: hostexec-ip-172-20-33-54.ap-northeast-2.compute.internal-cs5wx started at 2023-01-03 12:25:46 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:26:11.977: INFO: inline-volume-tester-nch9r started at 2023-01-03 12:18:41 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:26:11.977: INFO: pod-bb73a438-bb41-4a0a-8f15-8c1ce4b15626 started at 2023-01-03 12:25:59 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:26:11.977: INFO: affinity-nodeport-timeout-225bm started at 2023-01-03 12:24:42 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container affinity-nodeport-timeout ready: true, restart count 0
Jan 3 12:26:11.977: INFO: busybox-113a8ec1-8769-4eed-b07b-3de46bcbab1a started at 2023-01-03 12:23:54 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container busybox ready: false, restart count 0
Jan 3 12:26:11.977: INFO: affinity-nodeport-transition-zmr5k started at 2023-01-03 12:25:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container affinity-nodeport-transition ready: true, restart count 0
Jan 3 12:26:11.977: INFO: pod-projected-secrets-fe8b75d9-693d-4a5d-ba29-28607d708ade started at 2023-01-03 12:26:10 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container projected-secret-volume-test ready: true, restart count 0
Jan 3 12:26:11.977: INFO: agnhost started at 2023-01-03 12:26:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Container agnhost ready: false, restart count 0
Jan 3 12:26:11.977: INFO: cilium-zp2v2 started at 2023-01-03 12:09:25 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:26:11.977: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:26:11.977: INFO: Container
cilium-agent ready: true, restart count 0 Jan 3 12:26:12.615: INFO: Latency metrics for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:26:12.615: INFO: Logging node info for node ip-172-20-37-52.ap-northeast-2.compute.internal Jan 3 12:26:12.790: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-52.ap-northeast-2.compute.internal c3f1ba3a-309d-47d4-9106-f4b4ecf80ce1 23959 0 2023-01-03 12:09:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-52.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-37-52.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-3851":"ip-172-20-37-52.ap-northeast-2.compute.internal","csi-hostpath-ephemeral-9442":"ip-172-20-37-52.ap-northeast-2.compute.internal","ebs.csi.aws.com":"i-0188365058f7426fb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-03 12:25:51 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-03 12:25:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0188365058f7426fb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:25:59 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:25:59 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:25:59 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:25:59 +0000 UTC,LastTransitionTime:2023-01-03 12:09:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.52,},NodeAddress{Type:ExternalIP,Address:54.180.156.67,},NodeAddress{Type:Hostname,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-180-156-67.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2beb620baecd9cbda3e3db4fc66fe2,SystemUUID:ec2beb62-0bae-cd9c-bda3-e3db4fc66fe2,BootID:e8a83e85-3d73-4732-8b1e-2d93bbf7f6bc,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 
quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9442^c1a07c30-8b61-11ed-8630-ee06f621d7af],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9442^c1a07c30-8b61-11ed-8630-ee06f621d7af,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-3851^cd8f1405-8b61-11ed-922c-9e64de886856,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-3851^cd9011cb-8b61-11ed-922c-9e64de886856,DevicePath:,},},Config:nil,},} Jan 3 12:26:12.791: INFO: Logging kubelet events for node ip-172-20-37-52.ap-northeast-2.compute.internal Jan 3 12:26:12.971: INFO: Logging pods the kubelet thinks is on node 
ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:26:13.157: INFO: coredns-867df8f45c-js4mj started at 2023-01-03 12:10:00 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:13.157: INFO: Container coredns ready: true, restart count 0
Jan 3 12:26:13.157: INFO: csi-hostpathplugin-0 started at 2023-01-03 12:25:42 +0000 UTC (0+7 container statuses recorded)
Jan 3 12:26:13.157: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container hostpath ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:26:13.157: INFO: cilium-v6smb started at 2023-01-03 12:09:20 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:26:13.157: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:26:13.157: INFO: ebs-csi-node-fkxkq started at 2023-01-03 12:09:20 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:26:13.157: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:26:13.157: INFO: csi-hostpathplugin-0 started at 2023-01-03 12:26:07 +0000 UTC (0+7 container statuses recorded)
Jan 3 12:26:13.157: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container hostpath ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:26:13.157: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:26:13.157: INFO: inline-volume-tester-gk7xn started at 2023-01-03 12:26:10 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:13.157: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 3 12:26:13.157: INFO: inline-volume-tester-b2kxk started at 2023-01-03 12:25:50 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:13.157: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:26:13.768: INFO: Latency metrics for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:26:13.768: INFO: Logging node info for node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:26:13.995: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-48-181.ap-northeast-2.compute.internal 96573984-3972-4958-a21d-91e5b7179ec3 22557 0 2023-01-03 12:09:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-48-181.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-48-181.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2507":"ip-172-20-48-181.ap-northeast-2.compute.internal","csi-hostpath-ephemeral-9375":"ip-172-20-48-181.ap-northeast-2.compute.internal","ebs.csi.aws.com":"i-0c02313085f6ea916"} node.alpha.kubernetes.io/ttl:0
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:17:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0c02313085f6ea916,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} 
{<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.48.181,},NodeAddress{Type:ExternalIP,Address:43.201.60.170,},NodeAddress{Type:Hostname,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-60-170.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2eb3762b49570c6e3d8607a5e516da,SystemUUID:ec2eb376-2b49-570c-6e3d-8607a5e516da,BootID:bd8a3c9d-ca15-400a-bb20-3a3e2aa04c7f,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 3 12:26:13.995: INFO: Logging kubelet events for node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:26:14.185: INFO: Logging pods the kubelet thinks is on node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:26:14.368: INFO: ebs-csi-node-5drk2 started at 2023-01-03 12:09:18 +0000 UTC (0+3 container statuses recorded) Jan 3 12:26:14.368: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:26:14.368: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:26:14.368: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 3 12:26:14.368: INFO: coredns-autoscaler-557ccb4c66-pj66n started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:14.368: INFO: Container autoscaler ready: true, restart count 0 Jan 3 12:26:14.368: INFO: 
csi-hostpathplugin-0 started at 2023-01-03 12:25:30 +0000 UTC (0+7 container statuses recorded)
Jan 3 12:26:14.368: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container hostpath ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:26:14.368: INFO: cilium-nsj92 started at 2023-01-03 12:09:18 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:26:14.368: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:26:14.368: INFO: coredns-867df8f45c-4fzzr started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:14.368: INFO: Container coredns ready: true, restart count 0
Jan 3 12:26:14.368: INFO: csi-hostpathplugin-0 started at 2023-01-03 12:17:59 +0000 UTC (0+7 container statuses recorded)
Jan 3 12:26:14.368: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container hostpath ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:26:14.368: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:26:14.368: INFO: inline-volume-tester-sqtq6 started at 2023-01-03 12:17:59 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:14.368: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:26:14.368: INFO: hostexec-ip-172-20-48-181.ap-northeast-2.compute.internal-8jlk2 started at 2023-01-03 12:26:04 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:26:14.368: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:26:14.992: INFO: Latency metrics for node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:26:14.992: INFO: Logging node info for node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:26:15.167: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-50-77.ap-northeast-2.compute.internal 8fb0fd08-c4e3-467d-a3ed-803fd4fc6cc5 23244 0 2023-01-03 12:07:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-50-77.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09592d5deddfe8924"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-03 12:07:28 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-03 12:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-03 12:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:08:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-03 12:09:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-09592d5deddfe8924,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3892264960 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3787407360 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:25:49 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:25:49 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:25:49 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:25:49 +0000 UTC,LastTransitionTime:2023-01-03 12:08:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.50.77,},NodeAddress{Type:ExternalIP,Address:15.165.77.221,},NodeAddress{Type:Hostname,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-15-165-77-221.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a153b3c5c2f63ca65051344522649,SystemUUID:ec2a153b-3c5c-2f63-ca65-051344522649,BootID:ef540d65-debc-49b7-93e8-d55cfe4956fd,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:136583630,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:126389044,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 
registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:54864177,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.1],SizeBytes:42982346,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.1],SizeBytes:42804933,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:26802430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.1],SizeBytes:4967349,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 3 12:26:15.167: INFO: Logging kubelet events for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:26:15.345: INFO: Logging pods the kubelet thinks is on node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:26:15.526: INFO: dns-controller-867784b75c-fs862 started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:15.526: INFO: Container dns-controller ready: true, restart count 0 Jan 3 12:26:15.526: INFO: kube-controller-manager-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:15.526: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 3 12:26:15.526: INFO: kube-scheduler-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:15.526: INFO: Container kube-scheduler ready: true, restart count 0 Jan 3 12:26:15.526: INFO: etcd-manager-events-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:15.526: INFO: Container etcd-manager ready: true, restart count 0 Jan 3 12:26:15.526: INFO: etcd-manager-main-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:15.526: INFO: Container etcd-manager ready: true, restart count 0 Jan 3 12:26:15.526: INFO: cilium-operator-d84d55876-jlw9m started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:15.526: INFO: Container cilium-operator ready: true, restart count 1 Jan 3 12:26:15.526: INFO: ebs-csi-controller-74ccd5888c-qh2jn started at 2023-01-03 12:07:55 +0000 UTC (0+5 container statuses recorded) Jan 3 12:26:15.526: INFO: Container csi-attacher ready: true, restart count 0 Jan 3 12:26:15.526: 
INFO: Container csi-provisioner ready: true, restart count 0 Jan 3 12:26:15.526: INFO: Container csi-resizer ready: true, restart count 0 Jan 3 12:26:15.526: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:26:15.526: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:26:15.526: INFO: kube-apiserver-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+2 container statuses recorded) Jan 3 12:26:15.526: INFO: Container healthcheck ready: true, restart count 0 Jan 3 12:26:15.526: INFO: Container kube-apiserver ready: true, restart count 1 Jan 3 12:26:15.526: INFO: ebs-csi-node-5hfnh started at 2023-01-03 12:07:53 +0000 UTC (0+3 container statuses recorded) Jan 3 12:26:15.526: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:26:15.526: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:26:15.526: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 3 12:26:15.526: INFO: cilium-jrcck started at 2023-01-03 12:07:53 +0000 UTC (1+1 container statuses recorded) Jan 3 12:26:15.526: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:26:15.526: INFO: Container cilium-agent ready: true, restart count 1 Jan 3 12:26:15.526: INFO: kops-controller-54wzq started at 2023-01-03 12:07:54 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:15.526: INFO: Container kops-controller ready: true, restart count 0 Jan 3 12:26:16.128: INFO: Latency metrics for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:26:16.128: INFO: Logging node info for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:26:16.303: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-52-37.ap-northeast-2.compute.internal aad66781-b33c-418b-b9d9-bf279890bb2f 23905 0 2023-01-03 12:09:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux 
failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-52-37.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-060b35b9149f1ba66"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-060b35b9149f1ba66,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:26:09 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:26:09 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:26:09 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:26:09 +0000 UTC,LastTransitionTime:2023-01-03 12:09:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.52.37,},NodeAddress{Type:ExternalIP,Address:3.38.101.72,},NodeAddress{Type:Hostname,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-38-101-72.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2e82df6bf35f54d2aca0b4e9167917,SystemUUID:ec2e82df-6bf3-5f54-d2ac-a0b4e9167917,BootID:53771566-a9e2-4e41-a8e2-f140d3f619b9,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 
docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:24316368,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0f447687f1c061b01],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f447687f1c061b01,DevicePath:,},},Config:nil,},} Jan 3 12:26:16.304: INFO: Logging kubelet events for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:26:16.481: INFO: Logging pods the kubelet thinks is on node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:26:16.661: INFO: hostexec-ip-172-20-52-37.ap-northeast-2.compute.internal-w6pkq started at 2023-01-03 12:25:30 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:16.661: INFO: Container agnhost-container ready: true, restart 
count 0 Jan 3 12:26:16.661: INFO: hostexec-ip-172-20-52-37.ap-northeast-2.compute.internal-9d9wp started at 2023-01-03 12:26:12 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:16.661: INFO: Container agnhost-container ready: false, restart count 0 Jan 3 12:26:16.661: INFO: deployment-8d545c96d-b459z started at <nil> (0+0 container statuses recorded) Jan 3 12:26:16.661: INFO: pod-projected-secrets-379c6334-23c6-4e36-bf7c-85e2a403ceb6 started at <nil> (0+0 container statuses recorded) Jan 3 12:26:16.661: INFO: pod-configmaps-249bf4e3-6150-4559-b946-f7817108806f started at 2023-01-03 12:25:43 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:16.661: INFO: Container env-test ready: false, restart count 0 Jan 3 12:26:16.661: INFO: ebs-csi-node-qg9wn started at 2023-01-03 12:09:19 +0000 UTC (0+3 container statuses recorded) Jan 3 12:26:16.661: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:26:16.661: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:26:16.661: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 3 12:26:16.661: INFO: cilium-8n4td started at 2023-01-03 12:09:19 +0000 UTC (1+1 container statuses recorded) Jan 3 12:26:16.661: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:26:16.661: INFO: Container cilium-agent ready: true, restart count 0 Jan 3 12:26:16.661: INFO: httpd started at 2023-01-03 12:26:07 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:16.661: INFO: Container httpd ready: false, restart count 0 Jan 3 12:26:16.661: INFO: deployment-7c658794b9-hb5t4 started at <nil> (0+0 container statuses recorded) Jan 3 12:26:16.661: INFO: affinity-nodeport-transition-fzclf started at 2023-01-03 12:25:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:16.661: INFO: Container affinity-nodeport-transition ready: true, restart count 0 Jan 3 12:26:16.661: INFO: rc-test-4sbmb started at 2023-01-03 12:25:45 +0000 UTC (0+1 container statuses recorded) 
Jan 3 12:26:16.661: INFO: Container rc-test ready: true, restart count 0 Jan 3 12:26:16.661: INFO: hostpathsymlink-client started at 2023-01-03 12:26:05 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:16.661: INFO: Container hostpathsymlink-client ready: true, restart count 0 Jan 3 12:26:16.661: INFO: test-ss-0 started at 2023-01-03 12:23:08 +0000 UTC (0+1 container statuses recorded) Jan 3 12:26:16.661: INFO: Container webserver ready: true, restart count 0 Jan 3 12:26:16.661: INFO: pod-subpath-test-preprovisionedpv-dvdl started at 2023-01-03 12:25:41 +0000 UTC (1+1 container statuses recorded) Jan 3 12:26:16.661: INFO: Init container init-volume-preprovisionedpv-dvdl ready: true, restart count 0 Jan 3 12:26:16.661: INFO: Container test-container-subpath-preprovisionedpv-dvdl ready: true, restart count 0 Jan 3 12:26:17.304: INFO: Latency metrics for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:26:17.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "fsgroupchangepolicy-8728" for this suite.
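The failing test exercises the pod-level `fsGroupChangePolicy: OnRootMismatch` setting against a dynamically provisioned EBS volume. For reference, a minimal pod manifest using that policy might look like the sketch below (the pod name, PVC name, and image are illustrative, not taken from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo            # hypothetical name
spec:
  securityContext:
    fsGroup: 1000               # group applied to volume contents
    fsGroupChangePolicy: "OnRootMismatch"  # recursive chown/chmod only if root dir ownership differs
  containers:
  - name: app
    image: registry.k8s.io/pause:3.6
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: fsgroup-demo-pvc   # hypothetical claim
```

With `OnRootMismatch`, the kubelet skips the recursive ownership change when the volume's root directory already matches the expected fsGroup; the test's chgrp-in-first-pod, new-fsgroup-in-second-pod sequence probes exactly that behavior.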
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\s\[Privileged\:ClusterAdmin\]\sshould\sbe\sable\sto\sdeny\scustom\sresource\screation\,\supdate\sand\sdeletion\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 Jan 3 12:23:21.126: waiting for the deployment status valid%!(EXTRA string=k8s.gcr.io/e2e-test-images/agnhost:2.39, string=sample-webhook-deployment, string=webhook-6587) Unexpected error: <*errors.errorString | 0xc0029a6e90>: { s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-6c69dbd86b\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}", } error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 
12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:824from junit_03.xml
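Both failures in this run surface as "timed out waiting for the condition" or "error waiting for deployment ... status to match expectation", which is the generic outcome when the e2e framework's poll loop exhausts its deadline before the watched condition becomes true. A minimal Python sketch of that poll-until-timeout pattern (a hypothetical helper for illustration, not the framework's actual Go code):

```python
import time


def wait_for(condition, timeout, interval=2.0):
    """Poll condition() until it returns True or `timeout` seconds elapse.

    Returns True on success; raises TimeoutError with a framework-style
    message if the deadline passes first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("timed out waiting for the condition")


# Usage: a condition that becomes true on the third poll.
calls = {"n": 0}

def ready():
    calls["n"] += 1
    return calls["n"] >= 3

assert wait_for(ready, timeout=30, interval=0.01) is True
```

The fsgroupchangepolicy test and the webhook deployment check both follow this shape: the pod (or ReplicaSet) never reached Ready within the window, so the poll raised the timeout error seen in the stack traces above.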
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]","total":-1,"completed":5,"skipped":17,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 3 12:18:17.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 3 12:18:20.773: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:18:22.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:18:24.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:18:26.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:18:28.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:18:30.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:18:32.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:18:34.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:18:36.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)}
[identical deployment status logged every ~2s from 12:18:38.949 through 12:20:04.950; Available=False (MinimumReplicasUnavailable) and Progressing=True (ReplicaSetUpdated) unchanged throughout, replica counts unchanged]
Jan 3 12:20:06.950: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:08.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:10.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:12.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:14.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:16.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:18.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:20.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:22.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:24.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:26.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:28.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, 
time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:30.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:32.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:34.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:36.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:38.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:40.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:42.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:44.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:46.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:48.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:50.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:52.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:54.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" 
is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:56.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:20:58.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:21:00.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:21:02.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:21:04.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, 
time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:21:06.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:21:08.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 3 12:21:10.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)}
[identical "deployment status" record logged every ~2s from 12:21:12.950 through 12:22:38.950; repeats elided]
Jan 3 12:22:40.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1,
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:22:42.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:22:44.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, 
time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:22:46.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:22:48.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:22:50.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:22:52.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:22:54.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:22:56.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:22:58.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:00.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:02.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:04.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:06.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:08.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:10.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" 
is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:12.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:14.950: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:16.951: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:18.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:20.949: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, 
time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:21.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 3 12:23:21.126: FAIL: waiting for the deployment status valid%!(EXTRA string=k8s.gcr.io/e2e-test-images/agnhost:2.39, string=sample-webhook-deployment, string=webhook-6587) Unexpected error: <*errors.errorString | 0xc0029a6e90>: { s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), 
LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"sample-webhook-deployment-6c69dbd86b\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}", } error waiting for deployment "sample-webhook-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 3, 12, 18, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.deployWebhookAndService(0xc000dfa2c0, {0xc001ab0210, 0x27}, 0xc0000b6c30, 0x20fb, 0x20fc) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:824 +0xe7d k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:99 +0x22e k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0004a9fb0) 
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000cad040, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "webhook-6587". STEP: Found 13 events. Jan 3 12:23:21.304: INFO: At 2023-01-03 12:18:20 +0000 UTC - event for sample-webhook-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-webhook-deployment-6c69dbd86b to 1 Jan 3 12:23:21.304: INFO: At 2023-01-03 12:18:20 +0000 UTC - event for sample-webhook-deployment-6c69dbd86b: {replicaset-controller } SuccessfulCreate: Created pod: sample-webhook-deployment-6c69dbd86b-h895w Jan 3 12:23:21.304: INFO: At 2023-01-03 12:18:20 +0000 UTC - event for sample-webhook-deployment-6c69dbd86b-h895w: {default-scheduler } Scheduled: Successfully assigned webhook-6587/sample-webhook-deployment-6c69dbd86b-h895w to ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:23:21.304: INFO: At 2023-01-03 12:18:21 +0000 UTC - event for sample-webhook-deployment-6c69dbd86b-h895w: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "9d5959d2a10e74b2a0eb4f91eb5eecdaa93dab776493d6a316973ed06bdd8351": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:23:21.304: INFO: At 2023-01-03 12:18:33 +0000 UTC - event for sample-webhook-deployment-6c69dbd86b-h895w: {kubelet 
ip-172-20-52-37.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "66b3c1f39276b2ea79eaa192f0bd1a24a8172ae71aecfdfd42983d88f5a9d165": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available [six further FailedCreatePodSandBox events with the same postIpamFailure "No more IPs available" error, 12:18:46 through 12:19:48, omitted] Jan 3 12:23:21.304: INFO: At 2023-01-03 12:20:02 +0000 UTC - event for sample-webhook-deployment-6c69dbd86b-h895w: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "6cbfc0ee55405f5a50cfaa94299e47375507dfddb41aee4dcee065d305579488": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] 
postIpamFailure No more IPs available Jan 3 12:23:21.304: INFO: At 2023-01-03 12:20:14 +0000 UTC - event for sample-webhook-deployment-6c69dbd86b-h895w: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7f33575403fb976bef79abfbf6f4ba4f21110995aa9de991b55ac95486ea5733": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:23:21.480: INFO: POD NODE PHASE GRACE CONDITIONS Jan 3 12:23:21.480: INFO: sample-webhook-deployment-6c69dbd86b-h895w ip-172-20-52-37.ap-northeast-2.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:18:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:18:20 +0000 UTC ContainersNotReady containers with unready status: [sample-webhook]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:18:20 +0000 UTC ContainersNotReady containers with unready status: [sample-webhook]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:18:20 +0000 UTC }] Jan 3 12:23:21.480: INFO: Jan 3 12:23:21.658: INFO: Unable to fetch webhook-6587/sample-webhook-deployment-6c69dbd86b-h895w/sample-webhook logs: the server rejected our request for an unknown reason (get pods sample-webhook-deployment-6c69dbd86b-h895w) Jan 3 12:23:21.835: INFO: Logging node info for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:23:22.012: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-33-54.ap-northeast-2.compute.internal feeb853a-f938-421c-a48c-593d753497df 19679 0 2023-01-03 12:09:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a 
kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-33-54.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-33-54.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02f9cef67ede2f5b0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-03 12:17:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-03 12:17:46 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-02f9cef67ede2f5b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:19:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.33.54,},NodeAddress{Type:ExternalIP,Address:43.201.108.232,},NodeAddress{Type:Hostname,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-108-232.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b83476f0693e43ae0a06bf0db9bb4,SystemUUID:ec2b8347-6f06-93e4-3ae0-a06bf0db9bb4,BootID:2182b644-e5a3-4e7d-a07b-8b550578833d,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e 
k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e kubernetes.io/csi/ebs.csi.aws.com^vol-00f36f534313dc59d kubernetes.io/csi/ebs.csi.aws.com^vol-0111a5b18e9f3e9cf kubernetes.io/csi/ebs.csi.aws.com^vol-0932e71ac82c64430 kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0111a5b18e9f3e9cf,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-00f36f534313dc59d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0932e71ac82c64430,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe,DevicePath:,},},Config:nil,},} Jan 3 12:23:22.012: INFO: Logging kubelet events for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:23:22.192: INFO: Logging pods the kubelet thinks is on node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:23:22.380: INFO: suspend-false-to-true-whr7v started at 2023-01-03 12:19:04 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:22.380: INFO: Container c ready: true, restart count 0 Jan 3 12:23:22.380: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:22.380: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:22.380: INFO: ebs-csi-node-lpbdv started at 2023-01-03 12:09:25 +0000 UTC (0+3 container statuses recorded) Jan 3 12:23:22.380: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:23:22.380: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:23:22.380: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 3 12:23:22.380: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7 started at 2023-01-03 
12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:22.380: INFO: metadata-volume-3c79ef34-79e3-40b9-b8bb-4bd9cdece4ce started at 2023-01-03 12:18:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container client-container ready: false, restart count 0
Jan 3 12:23:22.380: INFO: suspend-false-to-true-ssg9g started at 2023-01-03 12:19:04 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container c ready: false, restart count 0
Jan 3 12:23:22.380: INFO: hostexec-ip-172-20-33-54.ap-northeast-2.compute.internal-wgxlb started at 2023-01-03 12:19:06 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:23:22.380: INFO: pod-subpath-test-preprovisionedpv-rdzh started at 2023-01-03 12:19:26 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Init container init-volume-preprovisionedpv-rdzh ready: false, restart count 0
Jan 3 12:23:22.380: INFO: Container test-container-subpath-preprovisionedpv-rdzh ready: false, restart count 0
Jan 3 12:23:22.380: INFO: pod-b7e8ee5d-ad53-4ca3-be9a-35d952ae3d5e started at 2023-01-03 12:19:44 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:23:22.380: INFO: netserver-0 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container webserver ready: false, restart count 0
Jan 3 12:23:22.380: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4rndx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:22.380: INFO: webserver-deployment-5d9fdcc779-bn7jl started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container httpd ready: false, restart count 0
Jan 3 12:23:22.380: INFO: webserver-deployment-5d9fdcc779-nq7v8 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container httpd ready: false, restart count 0
Jan 3 12:23:22.380: INFO: webserver-deployment-5d9fdcc779-6b7jn started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container httpd ready: false, restart count 0
Jan 3 12:23:22.380: INFO: inline-volume-tester-nch9r started at 2023-01-03 12:18:41 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 3 12:23:22.380: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:22.380: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-m9mmt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:22.380: INFO: pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521 started at 2023-01-03 12:19:30 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container env-test ready: false, restart count 0
Jan 3 12:23:22.380: INFO: netserver-0 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container webserver ready: false, restart count 0
Jan 3 12:23:22.380: INFO: cilium-zp2v2 started at 2023-01-03 12:09:25 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:23:22.380: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:23:22.380: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:22.380: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:22.380: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:22.380: INFO: pode29dd966-b8d7-4a8b-959e-281433e89dbd started at 2023-01-03 12:22:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container nginx ready: false, restart count 0
Jan 3 12:23:22.380: INFO: pod-handle-http-request started at 2023-01-03 12:19:03 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container agnhost-container ready: false, restart count 0
Jan 3 12:23:22.380: INFO: pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67 started at 2023-01-03 12:18:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:23:22.380: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:22.380: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:22.380: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:22.380: INFO: csi-mockplugin-0 started at
2023-01-03 12:17:52 +0000 UTC (0+3 container statuses recorded) Jan 3 12:23:22.380: INFO: Container csi-provisioner ready: false, restart count 0 Jan 3 12:23:22.380: INFO: Container driver-registrar ready: false, restart count 0 Jan 3 12:23:22.380: INFO: Container mock ready: false, restart count 0 Jan 3 12:23:22.380: INFO: csi-mockplugin-attacher-0 started at 2023-01-03 12:17:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:22.380: INFO: Container csi-attacher ready: true, restart count 0 Jan 3 12:23:22.989: INFO: Latency metrics for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:23:22.989: INFO: Logging node info for node ip-172-20-37-52.ap-northeast-2.compute.internal Jan 3 12:23:23.165: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-52.ap-northeast-2.compute.internal c3f1ba3a-309d-47d4-9106-f4b4ecf80ce1 19624 0 2023-01-03 12:09:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-52.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-37-52.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0188365058f7426fb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-03 12:19:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0188365058f7426fb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.52,},NodeAddress{Type:ExternalIP,Address:54.180.156.67,},NodeAddress{Type:Hostname,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-180-156-67.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2beb620baecd9cbda3e3db4fc66fe2,SystemUUID:ec2beb62-0bae-cd9c-bda3-e3db4fc66fe2,BootID:e8a83e85-3d73-4732-8b1e-2d93bbf7f6bc,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 3 12:23:23.166: INFO: Logging kubelet events for node ip-172-20-37-52.ap-northeast-2.compute.internal Jan 3 12:23:23.345: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-52.ap-northeast-2.compute.internal Jan 3 12:23:23.529: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:23.529: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:23.529: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-pjqlp started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:23.529: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:23.529: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-5jxz7 started at 2023-01-03 12:20:49 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:23.529: INFO: Container agnhost-container ready: true, restart count 0 Jan 3 12:23:23.529: INFO: webserver-deployment-5d9fdcc779-4tszx started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:23.529: INFO: Container httpd ready: false, restart count 0 Jan 3 12:23:23.529: INFO: cilium-v6smb started at 2023-01-03 12:09:20 +0000 UTC (1+1 container statuses recorded) Jan 3 12:23:23.529: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:23:23.529: INFO: Container cilium-agent ready: true, restart count 0 Jan 3 12:23:23.529: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f started at 2023-01-03 
12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:23.529: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:23.529: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-w42zg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:23.529: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-l9xkv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:23.529: INFO: ebs-csi-node-fkxkq started at 2023-01-03 12:09:20 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:23:23.529: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:23:23.529: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:23:23.529: INFO: pod-31fd5f45-887c-4042-895a-ab4f7b50670f started at 2023-01-03 12:20:38 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:23:23.529: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:23.529: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:23.529: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:23.529: INFO: pod-subpath-test-preprovisionedpv-4ngh started at 2023-01-03 12:21:10 +0000 UTC (2+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Init container init-volume-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:23:23.529: INFO: Init container test-init-subpath-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:23:23.529: INFO: Container test-container-subpath-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:23:23.529: INFO: coredns-867df8f45c-js4mj started at 2023-01-03 12:10:00 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container coredns ready: true, restart count 0
Jan 3 12:23:23.529: INFO: webserver-deployment-5d9fdcc779-htl5s started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container httpd ready: false, restart count 0
Jan 3 12:23:23.529: INFO: hostpath-symlink-prep-provisioning-1614 started at 2023-01-03 12:18:57 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container init-volume-provisioning-1614 ready: false, restart count 0
Jan 3 12:23:23.529: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-j8lbc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:23.529: INFO: netserver-1 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:23.529: INFO: Container webserver ready: false, restart count 0
Jan 3 12:23:23.529: INFO: netserver-1 started at 2023-01-03 12:20:12 +0000 UTC (0+1
container statuses recorded) Jan 3 12:23:23.529: INFO: Container webserver ready: true, restart count 0 Jan 3 12:23:23.529: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-w96g7 started at 2023-01-03 12:20:33 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:23.529: INFO: Container agnhost-container ready: true, restart count 0 Jan 3 12:23:24.164: INFO: Latency metrics for node ip-172-20-37-52.ap-northeast-2.compute.internal Jan 3 12:23:24.164: INFO: Logging node info for node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:23:24.340: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-48-181.ap-northeast-2.compute.internal 96573984-3972-4958-a21d-91e5b7179ec3 20641 0 2023-01-03 12:09:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-48-181.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-48-181.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2507":"ip-172-20-48-181.ap-northeast-2.compute.internal","ebs.csi.aws.com":"i-0c02313085f6ea916"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:17:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {e2e.test Update v1 2023-01-03 12:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0c02313085f6ea916,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.48.181,},NodeAddress{Type:ExternalIP,Address:43.201.60.170,},NodeAddress{Type:Hostname,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-60-170.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2eb3762b49570c6e3d8607a5e516da,SystemUUID:ec2eb376-2b49-570c-6e3d-8607a5e516da,BootID:bd8a3c9d-ca15-400a-bb20-3a3e2aa04c7f,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 3 12:23:24.341: INFO: Logging kubelet events for node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:23:24.520: INFO: Logging pods the kubelet thinks is on node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:23:24.707: INFO: coredns-autoscaler-557ccb4c66-pj66n started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:24.707: INFO: Container autoscaler ready: true, restart count 0 Jan 3 12:23:24.707: INFO: hostexec-ip-172-20-48-181.ap-northeast-2.compute.internal-wl8zb started at 2023-01-03 12:20:11 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:24.707: INFO: Container agnhost-container ready: true, restart count 0 Jan 3 12:23:24.707: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container 
statuses recorded)
Jan 3 12:23:24.707: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:24.707: INFO: webserver-deployment-5d9fdcc779-g9bm5 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container httpd ready: false, restart count 0
Jan 3 12:23:24.707: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-v2c2j started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:24.707: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-p2j42 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0
Jan 3 12:23:24.707: INFO: csi-hostpathplugin-0 started at 2023-01-03 12:17:59 +0000 UTC (0+7 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:23:24.707: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:23:24.707: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:23:24.707: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 3 12:23:24.707: INFO: Container hostpath ready: true, restart count 0
Jan 3 12:23:24.707: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:23:24.707: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:23:24.707: INFO: hostexec-ip-172-20-48-181.ap-northeast-2.compute.internal-82t7w started at 2023-01-03 12:18:39 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:23:24.707: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:24.707: INFO: inline-volume-tester-sqtq6 started at 2023-01-03 12:17:59 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:23:24.707: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-czr66 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0
Jan 3 12:23:24.707: INFO: netserver-2 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container webserver ready: false, restart count 0
Jan 3 12:23:24.707: INFO: pod-87dc35fb-f86b-4b15-8720-1d77c6521c5b started at 2023-01-03 12:20:15 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:23:24.707: INFO: cilium-nsj92 started at 2023-01-03 12:09:18 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:23:24.707: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:23:24.707: INFO: coredns-867df8f45c-4fzzr started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container coredns ready: true, restart count 0
Jan 3 12:23:24.707: INFO: pod-afc94ed4-6866-4f67-b600-8edf5a99e2be started at 2023-01-03 12:18:46 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:23:24.707: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:24.707: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:24.707: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:24.707: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:24.707: INFO: netserver-2 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container webserver ready: false, restart count 0
Jan 3 12:23:24.707: INFO: ebs-csi-node-5drk2 started at 2023-01-03 12:09:18 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:23:24.707: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:23:24.707: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:23:24.707: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:24.707: INFO: webserver-deployment-5d9fdcc779-f9rvm started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container httpd ready: false, restart count 0
Jan 3 12:23:24.707: INFO: inline-volume-tester2-6gs8h started at 2023-01-03 12:18:18 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:24.707: INFO: Container csi-volume-tester ready: false,
restart count 0 Jan 3 12:23:25.339: INFO: Latency metrics for node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:23:25.339: INFO: Logging node info for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:23:25.515: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-50-77.ap-northeast-2.compute.internal 8fb0fd08-c4e3-467d-a3ed-803fd4fc6cc5 20092 0 2023-01-03 12:07:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-50-77.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09592d5deddfe8924"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-03 12:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-03 12:07:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-03 12:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:08:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-03 12:09:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-09592d5deddfe8924,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3892264960 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3787407360 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:08:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.50.77,},NodeAddress{Type:ExternalIP,Address:15.165.77.221,},NodeAddress{Type:Hostname,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-15-165-77-221.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a153b3c5c2f63ca65051344522649,SystemUUID:ec2a153b-3c5c-2f63-ca65-051344522649,BootID:ef540d65-debc-49b7-93e8-d55cfe4956fd,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 
registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:136583630,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:126389044,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:54864177,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.1],SizeBytes:42982346,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.1],SizeBytes:42804933,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:26802430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b 
registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.1],SizeBytes:4967349,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 3 12:23:25.515: INFO: Logging kubelet events for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:23:25.696: INFO: Logging pods the kubelet thinks is on node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:23:25.878: INFO: kube-apiserver-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+2 container statuses recorded) Jan 3 12:23:25.878: INFO: Container healthcheck ready: true, restart count 0 Jan 3 12:23:25.878: INFO: Container kube-apiserver ready: true, restart count 1 Jan 3 12:23:25.878: INFO: ebs-csi-node-5hfnh started at 2023-01-03 12:07:53 +0000 UTC (0+3 container statuses recorded) Jan 3 12:23:25.878: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:23:25.878: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:23:25.878: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 3 12:23:25.878: INFO: cilium-jrcck started at 2023-01-03 12:07:53 +0000 UTC (1+1 container statuses recorded) Jan 3 12:23:25.878: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:23:25.878: INFO: Container cilium-agent 
ready: true, restart count 1
Jan 3 12:23:25.878: INFO: kops-controller-54wzq started at 2023-01-03 12:07:54 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:25.878: INFO: Container kops-controller ready: true, restart count 0
Jan 3 12:23:25.878: INFO: dns-controller-867784b75c-fs862 started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:25.878: INFO: Container dns-controller ready: true, restart count 0
Jan 3 12:23:25.878: INFO: kube-controller-manager-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:25.878: INFO: Container kube-controller-manager ready: true, restart count 2
Jan 3 12:23:25.878: INFO: kube-scheduler-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:25.878: INFO: Container kube-scheduler ready: true, restart count 0
Jan 3 12:23:25.878: INFO: etcd-manager-events-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:25.878: INFO: Container etcd-manager ready: true, restart count 0
Jan 3 12:23:25.878: INFO: etcd-manager-main-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:25.878: INFO: Container etcd-manager ready: true, restart count 0
Jan 3 12:23:25.878: INFO: cilium-operator-d84d55876-jlw9m started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:25.878: INFO: Container cilium-operator ready: true, restart count 1
Jan 3 12:23:25.878: INFO: ebs-csi-controller-74ccd5888c-qh2jn started at 2023-01-03 12:07:55 +0000 UTC (0+5 container statuses recorded)
Jan 3 12:23:25.878: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:23:25.878: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:23:25.878: INFO: Container
csi-resizer ready: true, restart count 0 Jan 3 12:23:25.879: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:23:25.879: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:23:26.461: INFO: Latency metrics for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:23:26.461: INFO: Logging node info for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:23:26.638: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-52-37.ap-northeast-2.compute.internal aad66781-b33c-418b-b9d9-bf279890bb2f 20086 0 2023-01-03 12:09:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-52-37.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-060b35b9149f1ba66"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-060b35b9149f1ba66,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.52.37,},NodeAddress{Type:ExternalIP,Address:3.38.101.72,},NodeAddress{Type:Hostname,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-38-101-72.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2e82df6bf35f54d2aca0b4e9167917,SystemUUID:ec2e82df-6bf3-5f54-d2ac-a0b4e9167917,BootID:53771566-a9e2-4e41-a8e2-f140d3f619b9,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 
k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32 kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83,DevicePath:,},},Config:nil,},} Jan 3 12:23:26.638: INFO: Logging kubelet events for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:23:26.817: INFO: Logging pods the kubelet thinks is on node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:23:27.001: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:27.001: INFO: inline-volume-tester-mmvnp started at 2023-01-03 12:20:40 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 3 12:23:27.001: INFO: webserver-deployment-5d9fdcc779-zvnv5 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container httpd ready: false, restart count 0 Jan 3 
12:23:27.001: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2kwc4 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:27.001: INFO: test-ss-0 started at 2023-01-03 12:23:08 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container webserver ready: false, restart count 0 Jan 3 12:23:27.001: INFO: sample-webhook-deployment-6c69dbd86b-h895w started at 2023-01-03 12:18:20 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container sample-webhook ready: false, restart count 0 Jan 3 12:23:27.001: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:27.001: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:27.001: INFO: sample-apiserver-deployment-7cdc9f5bf7-tmj6p started at 2023-01-03 12:23:11 +0000 UTC (0+2 container statuses recorded) Jan 3 12:23:27.001: INFO: Container etcd ready: false, restart count 0 Jan 3 12:23:27.001: INFO: Container sample-apiserver ready: false, restart count 0 Jan 3 12:23:27.001: INFO: netserver-3 started at 2023-01-03 12:20:13 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container webserver ready: true, restart count 0 Jan 3 12:23:27.001: INFO: sample-crd-conversion-webhook-deployment-67c86bcf4b-bkg7c started at 2023-01-03 12:23:19 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container sample-crd-conversion-webhook ready: false, restart count 0 Jan 3 12:23:27.001: INFO: 
cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:27.001: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:27.001: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:27.001: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-k28rp started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:27.001: INFO: netserver-3 started at 2023-01-03 12:20:03 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container webserver ready: true, restart count 0 Jan 3 12:23:27.001: INFO: ebs-csi-node-qg9wn started at 2023-01-03 12:09:19 +0000 UTC (0+3 container statuses recorded) Jan 3 12:23:27.001: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:23:27.001: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:23:27.001: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 3 12:23:27.001: INFO: cilium-8n4td started at 2023-01-03 12:09:19 +0000 UTC (1+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:23:27.001: INFO: Container cilium-agent ready: true, restart count 0 Jan 3 12:23:27.001: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgpzl started at 2023-01-03 12:17:51 +0000 UTC (0+1 
container statuses recorded) Jan 3 12:23:27.001: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:27.001: INFO: webserver-deployment-5d9fdcc779-tws27 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container httpd ready: false, restart count 0 Jan 3 12:23:27.001: INFO: webserver-deployment-5d9fdcc779-nwlm7 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container httpd ready: false, restart count 0 Jan 3 12:23:27.001: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:27.001: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:27.613: INFO: Latency metrics for node ip-172-20-52-37.ap-northeast-2.compute.internal STEP: Collecting events from namespace "webhook-6587-markers". STEP: Found 0 events. 
Jan 3 12:23:27.965: INFO: POD NODE PHASE GRACE CONDITIONS Jan 3 12:23:27.965: INFO: Jan 3 12:23:28.142: INFO: Logging node info for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:23:28.318: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-33-54.ap-northeast-2.compute.internal feeb853a-f938-421c-a48c-593d753497df 20908 0 2023-01-03 12:09:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-33-54.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-33-54.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02f9cef67ede2f5b0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-03 12:17:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-03 12:17:46 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-02f9cef67ede2f5b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: 
{{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:23:21 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:23:21 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:23:21 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:23:21 +0000 UTC,LastTransitionTime:2023-01-03 12:09:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.33.54,},NodeAddress{Type:ExternalIP,Address:43.201.108.232,},NodeAddress{Type:Hostname,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-108-232.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b83476f0693e43ae0a06bf0db9bb4,SystemUUID:ec2b8347-6f06-93e4-3ae0-a06bf0db9bb4,BootID:2182b644-e5a3-4e7d-a07b-8b550578833d,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a 
registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e kubernetes.io/csi/ebs.csi.aws.com^vol-00f36f534313dc59d 
kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-00f36f534313dc59d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e,DevicePath:,},},Config:nil,},} Jan 3 12:23:28.319: INFO: Logging kubelet events for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:23:28.497: INFO: Logging pods the kubelet thinks is on node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:23:28.682: INFO: webserver-deployment-5d9fdcc779-bn7jl started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container httpd ready: false, restart count 0 Jan 3 12:23:28.682: INFO: webserver-deployment-5d9fdcc779-nq7v8 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container httpd ready: false, restart count 0 Jan 3 12:23:28.682: INFO: webserver-deployment-5d9fdcc779-6b7jn started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container httpd ready: false, restart count 0 Jan 3 12:23:28.682: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4rndx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:28.682: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-m9mmt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:28.682: INFO: pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521 started at 2023-01-03 12:19:30 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container env-test ready: false, restart count 0 Jan 3 
12:23:28.682: INFO: netserver-0 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container webserver ready: false, restart count 0 Jan 3 12:23:28.682: INFO: inline-volume-tester-nch9r started at 2023-01-03 12:18:41 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 3 12:23:28.682: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:28.682: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:28.682: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:28.682: INFO: pode29dd966-b8d7-4a8b-959e-281433e89dbd started at 2023-01-03 12:22:02 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container nginx ready: false, restart count 0 Jan 3 12:23:28.682: INFO: cilium-zp2v2 started at 2023-01-03 12:09:25 +0000 UTC (1+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:23:28.682: INFO: Container cilium-agent ready: true, restart count 0 Jan 3 12:23:28.682: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:28.682: INFO: 
pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67 started at 2023-01-03 12:18:12 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container write-pod ready: false, restart count 0 Jan 3 12:23:28.682: INFO: pod-handle-http-request started at 2023-01-03 12:19:03 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container agnhost-container ready: false, restart count 0 Jan 3 12:23:28.682: INFO: csi-mockplugin-0 started at 2023-01-03 12:17:52 +0000 UTC (0+3 container statuses recorded) Jan 3 12:23:28.682: INFO: Container csi-provisioner ready: false, restart count 0 Jan 3 12:23:28.682: INFO: Container driver-registrar ready: false, restart count 0 Jan 3 12:23:28.682: INFO: Container mock ready: false, restart count 0 Jan 3 12:23:28.682: INFO: csi-mockplugin-attacher-0 started at 2023-01-03 12:17:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container csi-attacher ready: true, restart count 0 Jan 3 12:23:28.682: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:28.682: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:28.682: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:28.682: INFO: suspend-false-to-true-whr7v started at 2023-01-03 12:19:04 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container c ready: true, restart count 0 Jan 3 12:23:28.682: INFO: 
metadata-volume-3c79ef34-79e3-40b9-b8bb-4bd9cdece4ce started at 2023-01-03 12:18:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container client-container ready: false, restart count 0 Jan 3 12:23:28.682: INFO: suspend-false-to-true-ssg9g started at 2023-01-03 12:19:04 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container c ready: false, restart count 0 Jan 3 12:23:28.682: INFO: hostexec-ip-172-20-33-54.ap-northeast-2.compute.internal-wgxlb started at 2023-01-03 12:19:06 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container agnhost-container ready: true, restart count 0 Jan 3 12:23:28.682: INFO: pod-subpath-test-preprovisionedpv-rdzh started at 2023-01-03 12:19:26 +0000 UTC (1+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Init container init-volume-preprovisionedpv-rdzh ready: false, restart count 0 Jan 3 12:23:28.682: INFO: Container test-container-subpath-preprovisionedpv-rdzh ready: false, restart count 0 Jan 3 12:23:28.682: INFO: pod-b7e8ee5d-ad53-4ca3-be9a-35d952ae3d5e started at 2023-01-03 12:19:44 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container write-pod ready: false, restart count 0 Jan 3 12:23:28.682: INFO: netserver-0 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container webserver ready: false, restart count 0 Jan 3 12:23:28.682: INFO: ebs-csi-node-lpbdv started at 2023-01-03 12:09:25 +0000 UTC (0+3 container statuses recorded) Jan 3 12:23:28.682: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:23:28.682: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:23:28.682: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 3 12:23:28.682: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:28.682: INFO: Container 
cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:29.271: INFO: Latency metrics for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:23:29.272: INFO: Logging node info for node ip-172-20-37-52.ap-northeast-2.compute.internal Jan 3 12:23:29.448: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-52.ap-northeast-2.compute.internal c3f1ba3a-309d-47d4-9106-f4b4ecf80ce1 19624 0 2023-01-03 12:09:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-52.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-37-52.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0188365058f7426fb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-03 12:19:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0188365058f7426fb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} 
{<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.52,},NodeAddress{Type:ExternalIP,Address:54.180.156.67,},NodeAddress{Type:Hostname,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-180-156-67.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2beb620baecd9cbda3e3db4fc66fe2,SystemUUID:ec2beb62-0bae-cd9c-bda3-e3db4fc66fe2,BootID:e8a83e85-3d73-4732-8b1e-2d93bbf7f6bc,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 
quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 
k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 3 12:23:29.449: INFO: Logging kubelet events for node ip-172-20-37-52.ap-northeast-2.compute.internal Jan 3 12:23:29.627: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-52.ap-northeast-2.compute.internal Jan 3 12:23:29.811: INFO: webserver-deployment-5d9fdcc779-4tszx started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:29.811: INFO: Container httpd ready: false, restart count 0 Jan 3 12:23:29.811: INFO: cilium-v6smb started at 2023-01-03 12:09:20 +0000 UTC (1+1 container statuses recorded) Jan 3 12:23:29.811: INFO: Init container clean-cilium-state 
ready: true, restart count 0
Jan 3 12:23:29.811: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:23:29.811: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:29.811: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:29.811: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:29.811: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-pjqlp started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:29.811: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-5jxz7 started at 2023-01-03 12:20:49 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:23:29.811: INFO: ebs-csi-node-fkxkq started at 2023-01-03 12:09:20 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:23:29.811: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:23:29.811: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:23:29.811: INFO: pod-31fd5f45-887c-4042-895a-ab4f7b50670f started at 2023-01-03 12:20:38 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:23:29.811: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:29.811: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-w42zg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:29.811: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-l9xkv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:29.811: INFO: coredns-867df8f45c-js4mj started at 2023-01-03 12:10:00 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container coredns ready: true, restart count 0
Jan 3 12:23:29.811: INFO: webserver-deployment-5d9fdcc779-htl5s started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container httpd ready: false, restart count 0
Jan 3 12:23:29.811: INFO: hostpath-symlink-prep-provisioning-1614 started at 2023-01-03 12:18:57 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container init-volume-provisioning-1614 ready: false, restart count 0
Jan 3 12:23:29.811: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:29.811: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:29.811:
INFO: pod-subpath-test-preprovisionedpv-4ngh started at 2023-01-03 12:21:10 +0000 UTC (2+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Init container init-volume-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:23:29.811: INFO: Init container test-init-subpath-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:23:29.811: INFO: Container test-container-subpath-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:23:29.811: INFO: netserver-1 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.811: INFO: Container webserver ready: false, restart count 0
Jan 3 12:23:29.812: INFO: netserver-1 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.812: INFO: Container webserver ready: true, restart count 0
Jan 3 12:23:29.812: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-w96g7 started at 2023-01-03 12:20:33 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.812: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:23:29.812: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-j8lbc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:29.812: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:30.416: INFO: Latency metrics for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:23:30.416: INFO: Logging node info for node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:23:30.592: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-48-181.ap-northeast-2.compute.internal 96573984-3972-4958-a21d-91e5b7179ec3 20641 0 2023-01-03 12:09:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64
kubernetes.io/hostname:ip-172-20-48-181.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-48-181.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2507":"ip-172-20-48-181.ap-northeast-2.compute.internal","ebs.csi.aws.com":"i-0c02313085f6ea916"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:17:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {e2e.test Update v1 2023-01-03 12:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0c02313085f6ea916,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 
UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.48.181,},NodeAddress{Type:ExternalIP,Address:43.201.60.170,},NodeAddress{Type:Hostname,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-60-170.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2eb3762b49570c6e3d8607a5e516da,SystemUUID:ec2eb376-2b49-570c-6e3d-8607a5e516da,BootID:bd8a3c9d-ca15-400a-bb20-3a3e2aa04c7f,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 
docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd 
k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 
registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 3 12:23:30.593: INFO: Logging kubelet events for node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:23:30.771: INFO: Logging pods the kubelet thinks is on node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:23:30.956: INFO: coredns-autoscaler-557ccb4c66-pj66n started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:30.956: INFO: Container autoscaler ready: true, restart count 0 Jan 3 12:23:30.956: INFO: hostexec-ip-172-20-48-181.ap-northeast-2.compute.internal-wl8zb started at 2023-01-03 12:20:11 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:30.956: 
INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:23:30.956: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:30.956: INFO: webserver-deployment-5d9fdcc779-g9bm5 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container httpd ready: false, restart count 0
Jan 3 12:23:30.956: INFO: hostexec-ip-172-20-48-181.ap-northeast-2.compute.internal-82t7w started at 2023-01-03 12:18:39 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:23:30.956: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:30.956: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-v2c2j started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:30.956: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-p2j42 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0
Jan 3 12:23:30.956: INFO: csi-hostpathplugin-0 started at 2023-01-03 12:17:59 +0000 UTC (0+7 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:23:30.956: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:23:30.956: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:23:30.956: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 3 12:23:30.956: INFO: Container hostpath ready: true, restart count 0
Jan 3 12:23:30.956: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:23:30.956: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:23:30.956: INFO: inline-volume-tester-sqtq6 started at 2023-01-03 12:17:59 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:23:30.956: INFO: cilium-nsj92 started at 2023-01-03 12:09:18 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:23:30.956: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:23:30.956: INFO: coredns-867df8f45c-4fzzr started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container coredns ready: true, restart count 0
Jan 3 12:23:30.956: INFO: pod-afc94ed4-6866-4f67-b600-8edf5a99e2be started at 2023-01-03 12:18:46 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:23:30.956: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:30.956: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-czr66 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0
Jan 3 12:23:30.956: INFO: netserver-2 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container webserver ready: false, restart count 0
Jan 3 12:23:30.956: INFO: pod-87dc35fb-f86b-4b15-8720-1d77c6521c5b started at 2023-01-03 12:20:15 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:23:30.956: INFO: ebs-csi-node-5drk2 started at 2023-01-03 12:09:18 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:23:30.956: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:23:30.956: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:23:30.956: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:30.956: INFO: webserver-deployment-5d9fdcc779-f9rvm started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container httpd ready: false, restart count 0
Jan 3 12:23:30.956: INFO: inline-volume-tester2-6gs8h started at 2023-01-03 12:18:18 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 3 12:23:30.956: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:30.956: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:30.956: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:23:30.956: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:23:30.956: INFO:
netserver-2 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:30.956: INFO: Container webserver ready: false, restart count 0 Jan 3 12:23:31.589: INFO: Latency metrics for node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:23:31.589: INFO: Logging node info for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:23:31.766: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-50-77.ap-northeast-2.compute.internal 8fb0fd08-c4e3-467d-a3ed-803fd4fc6cc5 20092 0 2023-01-03 12:07:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-50-77.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09592d5deddfe8924"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-03 12:07:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-03 12:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-03 12:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:08:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-03 12:09:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-09592d5deddfe8924,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3892264960 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3787407360 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:08:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.50.77,},NodeAddress{Type:ExternalIP,Address:15.165.77.221,},NodeAddress{Type:Hostname,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-15-165-77-221.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a153b3c5c2f63ca65051344522649,SystemUUID:ec2a153b-3c5c-2f63-ca65-051344522649,BootID:ef540d65-debc-49b7-93e8-d55cfe4956fd,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:136583630,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:126389044,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 
registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:54864177,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.1],SizeBytes:42982346,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.1],SizeBytes:42804933,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:26802430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.1],SizeBytes:4967349,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 3 12:23:31.767: INFO: Logging kubelet events for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:23:31.945: INFO: Logging pods the kubelet thinks is on node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:23:32.127: INFO: ebs-csi-node-5hfnh started at 2023-01-03 12:07:53 +0000 UTC (0+3 container statuses recorded) Jan 3 12:23:32.127: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:23:32.127: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:23:32.127: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 3 12:23:32.127: INFO: cilium-jrcck started at 2023-01-03 12:07:53 +0000 UTC (1+1 container statuses recorded) Jan 3 12:23:32.127: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:23:32.127: INFO: Container cilium-agent ready: true, restart count 1 Jan 3 12:23:32.127: INFO: kops-controller-54wzq started at 2023-01-03 12:07:54 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:32.127: INFO: Container kops-controller ready: true, restart count 0 Jan 3 12:23:32.127: INFO: kube-apiserver-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+2 container statuses recorded) Jan 3 12:23:32.127: INFO: Container healthcheck ready: true, restart count 0 Jan 3 12:23:32.127: INFO: Container kube-apiserver ready: true, restart count 1 Jan 3 12:23:32.127: INFO: kube-scheduler-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:32.127: INFO: Container kube-scheduler ready: true, restart count 0 Jan 3 12:23:32.127: INFO: etcd-manager-events-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:32.127: INFO: Container etcd-manager ready: true, restart count 0 
Jan 3 12:23:32.127: INFO: etcd-manager-main-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:32.127: INFO: Container etcd-manager ready: true, restart count 0 Jan 3 12:23:32.127: INFO: cilium-operator-d84d55876-jlw9m started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:32.127: INFO: Container cilium-operator ready: true, restart count 1 Jan 3 12:23:32.127: INFO: ebs-csi-controller-74ccd5888c-qh2jn started at 2023-01-03 12:07:55 +0000 UTC (0+5 container statuses recorded) Jan 3 12:23:32.127: INFO: Container csi-attacher ready: true, restart count 0 Jan 3 12:23:32.127: INFO: Container csi-provisioner ready: true, restart count 0 Jan 3 12:23:32.127: INFO: Container csi-resizer ready: true, restart count 0 Jan 3 12:23:32.127: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:23:32.127: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:23:32.127: INFO: dns-controller-867784b75c-fs862 started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:32.127: INFO: Container dns-controller ready: true, restart count 0 Jan 3 12:23:32.127: INFO: kube-controller-manager-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:32.127: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 3 12:23:32.707: INFO: Latency metrics for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:23:32.707: INFO: Logging node info for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:23:32.883: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-52-37.ap-northeast-2.compute.internal aad66781-b33c-418b-b9d9-bf279890bb2f 20086 0 2023-01-03 12:09:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux 
failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-52-37.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-060b35b9149f1ba66"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-060b35b9149f1ba66,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.52.37,},NodeAddress{Type:ExternalIP,Address:3.38.101.72,},NodeAddress{Type:Hostname,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-38-101-72.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2e82df6bf35f54d2aca0b4e9167917,SystemUUID:ec2e82df-6bf3-5f54-d2ac-a0b4e9167917,BootID:53771566-a9e2-4e41-a8e2-f140d3f619b9,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e 
k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32 kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83,DevicePath:,},},Config:nil,},} Jan 3 12:23:32.884: INFO: Logging kubelet events for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:23:33.062: INFO: Logging pods the kubelet thinks is on node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:23:33.245: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:33.245: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf 
ready: true, restart count 0 Jan 3 12:23:33.245: INFO: inline-volume-tester-mmvnp started at 2023-01-03 12:20:40 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 3 12:23:33.245: INFO: webserver-deployment-5d9fdcc779-zvnv5 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container httpd ready: false, restart count 0 Jan 3 12:23:33.245: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2kwc4 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:33.245: INFO: test-ss-0 started at 2023-01-03 12:23:08 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container webserver ready: false, restart count 0 Jan 3 12:23:33.245: INFO: sample-webhook-deployment-6c69dbd86b-h895w started at 2023-01-03 12:18:20 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container sample-webhook ready: false, restart count 0 Jan 3 12:23:33.245: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:33.245: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:33.245: INFO: sample-apiserver-deployment-7cdc9f5bf7-tmj6p started at 2023-01-03 12:23:11 +0000 UTC (0+2 container statuses recorded) Jan 3 12:23:33.245: INFO: Container etcd ready: false, restart count 0 Jan 3 12:23:33.245: INFO: Container sample-apiserver ready: false, restart count 0 Jan 3 12:23:33.245: INFO: 
sample-crd-conversion-webhook-deployment-67c86bcf4b-bkg7c started at 2023-01-03 12:23:19 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container sample-crd-conversion-webhook ready: false, restart count 0 Jan 3 12:23:33.245: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:33.245: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:33.245: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:33.245: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-k28rp started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:33.245: INFO: netserver-3 started at 2023-01-03 12:20:03 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container webserver ready: true, restart count 0 Jan 3 12:23:33.245: INFO: netserver-3 started at 2023-01-03 12:20:13 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container webserver ready: true, restart count 0 Jan 3 12:23:33.245: INFO: ebs-csi-node-qg9wn started at 2023-01-03 12:09:19 +0000 UTC (0+3 container statuses recorded) Jan 3 12:23:33.245: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:23:33.245: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:23:33.245: INFO: Container 
node-driver-registrar ready: true, restart count 0 Jan 3 12:23:33.245: INFO: cilium-8n4td started at 2023-01-03 12:09:19 +0000 UTC (1+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:23:33.245: INFO: Container cilium-agent ready: true, restart count 0 Jan 3 12:23:33.245: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgpzl started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:23:33.245: INFO: webserver-deployment-5d9fdcc779-tws27 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container httpd ready: false, restart count 0 Jan 3 12:23:33.245: INFO: webserver-deployment-5d9fdcc779-nwlm7 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:23:33.245: INFO: Container httpd ready: false, restart count 0 Jan 3 12:23:33.833: INFO: Latency metrics for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:23:33.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6587" for this suite. STEP: Destroying namespace "webhook-6587-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sbe\sable\sto\shandle\slarge\srequests\:\sudp$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:453 Jan 3 12:25:13.676: Unexpected error: <*errors.errorString | 0xc0002482d0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:858 from junit_14.xml
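The generic `timed out waiting for the condition` error above (and the long run of "Pending, waiting for it to be Running" lines below) comes from the e2e framework's condition-polling helpers: check a condition immediately, re-check on a fixed interval, and give up at a deadline. A minimal sketch of that pattern in Python — not the framework's actual Go code, and `becomes_ready` is a purely illustrative stand-in for a real readiness check:

```python
import time

def poll_immediate(interval, timeout, condition):
    """Call condition() immediately, then every `interval` seconds,
    until it returns True or `timeout` seconds elapse. On timeout,
    raise the same generic message seen in the log above."""
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return
        if time.monotonic() >= deadline:
            raise TimeoutError("timed out waiting for the condition")
        time.sleep(min(interval, max(0.0, deadline - time.monotonic())))

# Illustrative condition that becomes true on the third check,
# standing in for "is the pod Running (with Ready = true)?".
checks = {"n": 0}
def becomes_ready():
    checks["n"] += 1
    return checks["n"] >= 3

poll_immediate(0.01, 1.0, becomes_ready)
print("condition met after", checks["n"], "checks")  # condition met after 3 checks
```

Because the error string is generic, the surrounding log context (which pod, which wait site in the stack trace) is what identifies the condition that never became true.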
[BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 3 12:20:10.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to handle large requests: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:453 STEP: Performing setup for networking test in namespace nettest-7930 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 3 12:20:12.058: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 3 12:20:13.320: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:15.497: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:17.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:19.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:21.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:23.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:25.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:27.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:29.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:31.498: INFO: The status of Pod netserver-0 
is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:33.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:35.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:37.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:39.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:41.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:43.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:45.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:47.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:49.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:51.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:53.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:55.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:57.497: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:20:59.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:01.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:03.502: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:05.498: INFO: The status of Pod netserver-0 is 
Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:07.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:09.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:11.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:13.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:15.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:17.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:19.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:21.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:23.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:25.503: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:27.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:29.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:31.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:33.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:35.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:37.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:39.498: INFO: The status of Pod netserver-0 is Pending, 
waiting for it to be Running (with Ready = true) Jan 3 12:21:41.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:43.497: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:45.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:47.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:49.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:51.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:53.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:55.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:57.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:21:59.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:01.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:03.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:05.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:07.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:09.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:11.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:13.498: INFO: The status of Pod netserver-0 is Pending, waiting 
for it to be Running (with Ready = true) Jan 3 12:22:15.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:17.502: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:19.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:21.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:23.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:25.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:27.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:29.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:31.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:33.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:35.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:37.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:39.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:41.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:43.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:45.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:47.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to 
be Running (with Ready = true) Jan 3 12:22:49.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:51.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:53.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:55.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:57.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:22:59.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:01.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:03.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:05.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:07.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:09.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:11.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:13.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:15.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:17.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:19.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:21.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running 
(with Ready = true) Jan 3 12:23:23.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:25.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:27.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:29.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:31.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:33.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:35.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:37.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:39.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:41.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:43.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:45.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:47.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:49.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:51.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:53.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:55.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with 
Ready = true) Jan 3 12:23:57.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:23:59.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:01.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:03.501: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:05.510: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:07.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:09.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:11.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:13.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:15.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:17.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:19.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:21.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:23.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:25.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:27.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:29.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = 
true) Jan 3 12:24:31.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:33.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:35.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:37.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:39.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:41.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:43.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:45.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:47.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:49.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:51.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:53.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:55.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:57.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:59.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:01.500: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:03.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 
3 12:25:05.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:07.499: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:09.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:11.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:13.498: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:13.676: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:13.676: FAIL: Unexpected error: <*errors.errorString | 0xc0002482d0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc000ec0460, {0x70da7f5, 0x9}, 0xc0039f88d0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:858 +0x1d4 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc000ec0460, 0xc0039f88d0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:760 +0x5b k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc000ec0460, 0x34) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:775 +0x3b k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0014611e0, {0x0, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:129 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.6.19() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:454 +0x36 
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000848b60, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-7930".
STEP: Found 34 events.
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:12 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-7930/netserver-0 to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:12 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-7930/netserver-1 to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:12 +0000 UTC - event for netserver-2: {default-scheduler } Scheduled: Successfully assigned nettest-7930/netserver-2 to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:13 +0000 UTC - event for netserver-0: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b602d55c26dfc97addd258b4aeec5d5b0fd4954e9de0d50a4cdba69615634e73": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:13 +0000 UTC - event for netserver-1: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code =
Unknown desc = failed to setup network for sandbox "c497821ed0c0d80ceecb990ef9847ce5ccd4621082b70facc08e41ba06495689": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:13 +0000 UTC - event for netserver-2: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "6e45d05cab4d66f739844f6443965219d54e6c8d0d6164892120fa374a3be7f9": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:13 +0000 UTC - event for netserver-3: {default-scheduler } Scheduled: Successfully assigned nettest-7930/netserver-3 to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:13 +0000 UTC - event for netserver-3: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "25ca60cff12e23b1788cd7ec241146161d0577ebe7a51c2290b764a30f2e42c2": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:27 +0000 UTC - event for netserver-1: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Started: Started container webserver
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:27 +0000 UTC - event for netserver-1: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Created: Created container webserver
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:27 +0000 UTC - event for netserver-1: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:52 +0000 UTC - event for netserver-3: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:52 +0000 UTC - event for netserver-3: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Created: Created container webserver
Jan 3 12:25:13.863: INFO: At 2023-01-03 12:20:53 +0000 UTC - event for netserver-3: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Started: Started container webserver
[... 20 further FailedCreatePodSandBox events with the same "[POST /ipam][502] postIpamFailure No more IPs available" cause omitted: netserver-0 at 12:20:29 through 12:22:19, netserver-2 at 12:20:25 through 12:22:13, netserver-3 at 12:20:24 and 12:20:37; the final netserver-0 and netserver-2 events are marked "(combined from similar events)" ...]
Jan 3 12:25:14.041: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 3 12:25:14.041: INFO: netserver-0 ip-172-20-33-54.ap-northeast-2.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:12 +0000 UTC }]
Jan 3 12:25:14.041: INFO: netserver-1 ip-172-20-37-52.ap-northeast-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:43 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:43 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:12 +0000 UTC }]
Jan 3 12:25:14.041: INFO: netserver-2 ip-172-20-48-181.ap-northeast-2.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]}
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:12 +0000 UTC }] Jan 3 12:25:14.041: INFO: netserver-3 ip-172-20-52-37.ap-northeast-2.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:13 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:21:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:21:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:20:13 +0000 UTC }] Jan 3 12:25:14.041: INFO: Jan 3 12:25:14.220: INFO: Unable to fetch nettest-7930/netserver-0/webserver logs: the server rejected our request for an unknown reason (get pods netserver-0) Jan 3 12:25:14.582: INFO: Unable to fetch nettest-7930/netserver-2/webserver logs: the server rejected our request for an unknown reason (get pods netserver-2) Jan 3 12:25:14.942: INFO: Logging node info for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:25:15.121: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-33-54.ap-northeast-2.compute.internal feeb853a-f938-421c-a48c-593d753497df 21709 0 2023-01-03 12:09:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-33-54.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-33-54.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02f9cef67ede2f5b0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 
12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-03 12:17:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-03 12:17:46 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-02f9cef67ede2f5b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.33.54,},NodeAddress{Type:ExternalIP,Address:43.201.108.232,},NodeAddress{Type:Hostname,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-108-232.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b83476f0693e43ae0a06bf0db9bb4,SystemUUID:ec2b8347-6f06-93e4-3ae0-a06bf0db9bb4,BootID:2182b644-e5a3-4e7d-a07b-8b550578833d,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a 
registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e 
kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe,DevicePath:,},},Config:nil,},}
Jan 3 12:25:15.121: INFO: Logging kubelet events for node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:15.302: INFO: Logging pods the kubelet thinks is on node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:15.488: INFO: cilium-zp2v2 started at 2023-01-03 12:09:25 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:25:15.488: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:25:15.488: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:25:15.488: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.488: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:15.488: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.488: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:15.488: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.488: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:15.488: INFO: pode29dd966-b8d7-4a8b-959e-281433e89dbd started at 2023-01-03 12:22:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.488: INFO: Container nginx ready: false, restart count 0
Jan 3 12:25:15.488: INFO: pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67 started at 2023-01-03 12:18:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.488: INFO: Container write-pod ready: true, restart count 0
Jan 3 12:25:15.488: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.488: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:15.488: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.488: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:15.489: INFO: netserver-0 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:15.489: INFO: pod-ff0cde31-130b-41cc-82ed-a17554f19830 started at 2023-01-03 12:24:03 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container test-container ready: false, restart count 0
Jan 3 12:25:15.489: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:15.489: INFO: ebs-csi-node-lpbdv started at 2023-01-03 12:09:25 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:25:15.489: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:15.489: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:25:15.489: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:15.489: INFO: netserver-0 started at 2023-01-03 12:25:13 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:15.489: INFO: netserver-0 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:15.489: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4rndx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:15.489: INFO: sample-webhook-deployment-6c69dbd86b-2bgfs started at 2023-01-03 12:24:40 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container sample-webhook ready: false, restart count 0
Jan 3 12:25:15.489: INFO: webserver-deployment-5d9fdcc779-bn7jl started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:15.489: INFO: webserver-deployment-5d9fdcc779-nq7v8 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:15.489: INFO: webserver-deployment-5d9fdcc779-6b7jn started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:15.489: INFO: inline-volume-tester-nch9r started at 2023-01-03 12:18:41 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:25:15.489: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:15.489: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-m9mmt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:15.489: INFO: affinity-nodeport-timeout-225bm started at 2023-01-03 12:24:42 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container affinity-nodeport-timeout ready: false, restart count 0
Jan 3 12:25:15.489: INFO: netserver-0 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:15.489: INFO: busybox-113a8ec1-8769-4eed-b07b-3de46bcbab1a started at 2023-01-03 12:23:54 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container busybox ready: false, restart count 0
Jan 3 12:25:15.489: INFO: pod-subpath-test-secret-pk62 started at 2023-01-03 12:24:42 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:15.489: INFO: Container test-container-subpath-secret-pk62 ready: false, restart count 0
Jan 3 12:25:16.136: INFO: Latency metrics for node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:16.137: INFO: Logging node info for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:16.315: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-52.ap-northeast-2.compute.internal c3f1ba3a-309d-47d4-9106-f4b4ecf80ce1 21697 0 2023-01-03 12:09:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-52.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-37-52.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2
topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0188365058f7426fb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-03 12:19:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0188365058f7426fb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:58 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:58 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:58 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:24:58 +0000 UTC,LastTransitionTime:2023-01-03 12:09:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.52,},NodeAddress{Type:ExternalIP,Address:54.180.156.67,},NodeAddress{Type:Hostname,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-180-156-67.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2beb620baecd9cbda3e3db4fc66fe2,SystemUUID:ec2beb62-0bae-cd9c-bda3-e3db4fc66fe2,BootID:e8a83e85-3d73-4732-8b1e-2d93bbf7f6bc,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 3 12:25:16.315: INFO: Logging kubelet events for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:16.499: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:16.703: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:16.703: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-pjqlp started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:16.703: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-5jxz7 started at 2023-01-03 12:20:49 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:25:16.703: INFO: webserver-deployment-5d9fdcc779-4tszx started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:16.703: INFO: netserver-1 started at 2023-01-03 12:25:13 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:16.703: INFO: cilium-v6smb started at 2023-01-03 12:09:20 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:25:16.703: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:25:16.703: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:16.703: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:16.703: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-w42zg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:16.703: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-l9xkv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:16.703: INFO: ebs-csi-node-fkxkq started at 2023-01-03 12:09:20 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:25:16.703: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:16.703: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:25:16.703: INFO: pod-31fd5f45-887c-4042-895a-ab4f7b50670f started at 2023-01-03 12:20:38 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:25:16.703: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:16.703: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:16.703: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:16.703: INFO: pod-subpath-test-preprovisionedpv-4ngh started at 2023-01-03 12:21:10 +0000 UTC (2+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Init container init-volume-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:25:16.703: INFO: Init container test-init-subpath-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:25:16.703: INFO: Container test-container-subpath-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:25:16.703: INFO: coredns-867df8f45c-js4mj started at 2023-01-03 12:10:00 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container coredns ready: true, restart count 0
Jan 3 12:25:16.703: INFO: webserver-deployment-5d9fdcc779-htl5s started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:16.703: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-v249m started at 2023-01-03 12:25:13 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:25:16.703: INFO: netserver-1 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:16.703: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-j8lbc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:16.703: INFO: netserver-1 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:16.703: INFO: netserver-1 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container webserver ready: true, restart count 0
Jan 3 12:25:16.703: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-w96g7 started at 2023-01-03 12:20:33 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:16.703: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:25:17.340: INFO: Latency metrics for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:17.340: INFO: Logging node info for node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:17.520: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-48-181.ap-northeast-2.compute.internal 96573984-3972-4958-a21d-91e5b7179ec3 21699 0 2023-01-03 12:09:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-48-181.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-48-181.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2507":"ip-172-20-48-181.ap-northeast-2.compute.internal","ebs.csi.aws.com":"i-0c02313085f6ea916"} node.alpha.kubernetes.io/ttl:0
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:17:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0c02313085f6ea916,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} 
{<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.48.181,},NodeAddress{Type:ExternalIP,Address:43.201.60.170,},NodeAddress{Type:Hostname,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-60-170.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2eb3762b49570c6e3d8607a5e516da,SystemUUID:ec2eb376-2b49-570c-6e3d-8607a5e516da,BootID:bd8a3c9d-ca15-400a-bb20-3a3e2aa04c7f,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 3 12:25:17.521: INFO: Logging kubelet events for node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:17.704: INFO: Logging pods the kubelet thinks is on node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:17.893: INFO: coredns-autoscaler-557ccb4c66-pj66n started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container autoscaler ready: true, restart count 0
Jan 3 12:25:17.893: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:17.893: INFO: exec-volume-test-preprovisionedpv-6fjc started at 2023-01-03 12:24:55 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container exec-container-preprovisionedpv-6fjc ready: false, restart count 0
Jan 3 12:25:17.893: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-p2j42 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0
Jan 3 12:25:17.893: INFO: hostexec-ip-172-20-48-181.ap-northeast-2.compute.internal-k9mr9 started at 2023-01-03 12:24:41 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:25:17.893: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:17.893: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-v2c2j started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:17.893: INFO: cilium-nsj92 started at 2023-01-03 12:09:18 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:25:17.893: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:25:17.893: INFO: coredns-867df8f45c-4fzzr started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container coredns ready: true, restart count 0
Jan 3 12:25:17.893: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:17.893: INFO: hostexec-ip-172-20-48-181.ap-northeast-2.compute.internal-wl8zb started at 2023-01-03 12:20:11 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:25:17.893: INFO: webserver-deployment-5d9fdcc779-g9bm5 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:17.893: INFO: netserver-2 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:17.893: INFO: csi-hostpathplugin-0 started at 2023-01-03 12:17:59 +0000 UTC (0+7 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:25:17.893: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:25:17.893: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:25:17.893: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 3 12:25:17.893: INFO: Container hostpath ready: true, restart count 0
Jan 3 12:25:17.893: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:17.893: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:25:17.893: INFO: inline-volume-tester-sqtq6 started at 2023-01-03 12:17:59 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:25:17.893: INFO: netserver-2 started at 2023-01-03 12:25:13 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:17.893: INFO: pod-87dc35fb-f86b-4b15-8720-1d77c6521c5b started at 2023-01-03 12:20:15 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:25:17.893: INFO: affinity-nodeport-timeout-s5fsw started at 2023-01-03 12:24:42 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container affinity-nodeport-timeout ready: false, restart count 0
Jan 3 12:25:17.893: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:17.893: INFO: netserver-2 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:17.893: INFO: inline-volume-tester2-6gs8h started at 2023-01-03 12:18:18 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 3 12:25:17.893: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:17.893: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:17.893: INFO: netserver-2 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:17.893: INFO: ebs-csi-node-5drk2 started at 2023-01-03 12:09:18 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:25:17.893: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:17.893: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:25:17.893: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0
Jan 3 12:25:17.893: INFO: webserver-deployment-5d9fdcc779-f9rvm started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:17.893: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:18.545: INFO: Latency metrics for node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:18.545: INFO: Logging node info for node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:25:18.723: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-50-77.ap-northeast-2.compute.internal 8fb0fd08-c4e3-467d-a3ed-803fd4fc6cc5 20092 0 2023-01-03 12:07:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-50-77.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09592d5deddfe8924"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-03 12:07:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-03 12:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-03 12:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:08:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-03 12:09:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-09592d5deddfe8924,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3892264960 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3787407360 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:08:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.50.77,},NodeAddress{Type:ExternalIP,Address:15.165.77.221,},NodeAddress{Type:Hostname,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-15-165-77-221.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a153b3c5c2f63ca65051344522649,SystemUUID:ec2a153b-3c5c-2f63-ca65-051344522649,BootID:ef540d65-debc-49b7-93e8-d55cfe4956fd,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:136583630,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:126389044,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 
registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:54864177,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.1],SizeBytes:42982346,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.1],SizeBytes:42804933,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:26802430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.1],SizeBytes:4967349,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 3 12:25:18.724: INFO: Logging kubelet events for node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:25:18.910: INFO: Logging pods the kubelet thinks is on node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:25:19.093: INFO: etcd-manager-main-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:19.093: INFO: Container etcd-manager ready: true, restart count 0
Jan 3 12:25:19.093: INFO: cilium-operator-d84d55876-jlw9m started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:19.093: INFO: Container cilium-operator ready: true, restart count 1
Jan 3 12:25:19.093: INFO: ebs-csi-controller-74ccd5888c-qh2jn started at 2023-01-03 12:07:55 +0000 UTC (0+5 container statuses recorded)
Jan 3 12:25:19.093: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:25:19.093: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:25:19.093: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:25:19.093: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:25:19.093: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:19.093: INFO: dns-controller-867784b75c-fs862 started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:19.093: INFO: Container dns-controller ready: true, restart count 0
Jan 3 12:25:19.093: INFO: kube-controller-manager-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:19.093: INFO: Container kube-controller-manager ready: true, restart count 2
Jan 3 12:25:19.093: INFO: kube-scheduler-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:19.093: INFO: Container kube-scheduler ready: true, restart count 0
Jan 3 12:25:19.093: INFO: etcd-manager-events-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:19.093: INFO: Container etcd-manager ready: true, restart count 0
Jan 3 12:25:19.093: INFO: kops-controller-54wzq started at 2023-01-03 12:07:54 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:19.093: INFO: Container kops-controller ready: true, restart count 0
Jan 3 12:25:19.093: INFO: kube-apiserver-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+2 container statuses recorded)
Jan 3 12:25:19.093: INFO: Container healthcheck ready: true, restart count 0
Jan 3 12:25:19.093: INFO: Container kube-apiserver ready: true, restart count 1
Jan 3 12:25:19.093: INFO: ebs-csi-node-5hfnh started at 2023-01-03 12:07:53 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:25:19.093: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:25:19.093: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:19.093: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:25:19.093: INFO: cilium-jrcck started at 2023-01-03 12:07:53 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:25:19.093: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:25:19.093: INFO: Container cilium-agent ready: true, restart count 1
Jan 3 12:25:19.706: INFO: Latency metrics for node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:25:19.706: INFO: Logging node info for node ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:19.884: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-52-37.ap-northeast-2.compute.internal aad66781-b33c-418b-b9d9-bf279890bb2f 21702 0 2023-01-03 12:09:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux 
failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-52-37.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-060b35b9149f1ba66"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-060b35b9149f1ba66,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:24:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.52.37,},NodeAddress{Type:ExternalIP,Address:3.38.101.72,},NodeAddress{Type:Hostname,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-38-101-72.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2e82df6bf35f54d2aca0b4e9167917,SystemUUID:ec2e82df-6bf3-5f54-d2ac-a0b4e9167917,BootID:53771566-a9e2-4e41-a8e2-f140d3f619b9,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e 
k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32 kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83 kubernetes.io/csi/ebs.csi.aws.com^vol-0f447687f1c061b01],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f447687f1c061b01,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32,DevicePath:,},},Config:nil,},} Jan 3 12:25:19.885: INFO: Logging kubelet events for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:25:20.065: INFO: Logging pods the kubelet thinks is on node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:25:20.253: INFO: ebs-csi-node-qg9wn started at 2023-01-03 12:09:19 +0000 UTC (0+3 container statuses recorded) Jan 3 12:25:20.253: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:25:20.253: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:25:20.253: INFO: Container node-driver-registrar ready: true, restart count 0 
Jan 3 12:25:20.253: INFO: cilium-8n4td started at 2023-01-03 12:09:19 +0000 UTC (1+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:25:20.253: INFO: Container cilium-agent ready: true, restart count 0 Jan 3 12:25:20.253: INFO: webserver-deployment-5d9fdcc779-tws27 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container httpd ready: false, restart count 0 Jan 3 12:25:20.253: INFO: webserver-deployment-5d9fdcc779-nwlm7 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container httpd ready: false, restart count 0 Jan 3 12:25:20.253: INFO: inline-volume-tester2-xcxjr started at 2023-01-03 12:24:46 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 3 12:25:20.253: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:25:20.253: INFO: pod1 started at 2023-01-03 12:24:07 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container agnhost-container ready: false, restart count 0 Jan 3 12:25:20.253: INFO: netserver-3 started at 2023-01-03 12:25:13 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container webserver ready: false, restart count 0 Jan 3 12:25:20.253: INFO: inline-volume-tester-mmvnp started at 2023-01-03 12:20:40 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 3 12:25:20.253: INFO: webserver-deployment-5d9fdcc779-zvnv5 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container httpd ready: false, restart count 0 Jan 3 
12:25:20.253: INFO: pod-e8e32ffc-f03a-43d1-8ea3-72be400f27cc started at 2023-01-03 12:24:12 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container test-container ready: false, restart count 0 Jan 3 12:25:20.253: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2kwc4 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:25:20.253: INFO: test-ss-0 started at 2023-01-03 12:23:08 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container webserver ready: false, restart count 0 Jan 3 12:25:20.253: INFO: affinity-nodeport-timeout-6l7c7 started at 2023-01-03 12:24:42 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container affinity-nodeport-timeout ready: false, restart count 0 Jan 3 12:25:20.253: INFO: netserver-3 started at 2023-01-03 12:24:02 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.253: INFO: Container webserver ready: false, restart count 0 Jan 3 12:25:20.254: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.254: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:25:20.254: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.254: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:25:20.254: INFO: sample-apiserver-deployment-7cdc9f5bf7-tmj6p started at 2023-01-03 12:23:11 +0000 UTC (0+2 container statuses recorded) Jan 3 12:25:20.254: INFO: Container etcd ready: false, restart count 0 Jan 3 12:25:20.254: INFO: Container sample-apiserver ready: false, restart count 0 Jan 3 12:25:20.254: INFO: sample-crd-conversion-webhook-deployment-67c86bcf4b-bkg7c 
started at 2023-01-03 12:23:19 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.254: INFO: Container sample-crd-conversion-webhook ready: false, restart count 0 Jan 3 12:25:20.254: INFO: hostpath-symlink-prep-volume-6653 started at 2023-01-03 12:24:05 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.254: INFO: Container init-volume-volume-6653 ready: false, restart count 0 Jan 3 12:25:20.254: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.254: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:25:20.254: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.254: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:25:20.254: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.254: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:25:20.254: INFO: netserver-3 started at 2023-01-03 12:20:03 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.254: INFO: Container webserver ready: true, restart count 0 Jan 3 12:25:20.254: INFO: netserver-3 started at 2023-01-03 12:20:13 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:20.254: INFO: Container webserver ready: true, restart count 0 Jan 3 12:25:20.883: INFO: Latency metrics for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:25:20.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-7930" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sfunction\sfor\sendpoint\-Service\:\sudp$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:248 Jan 3 12:25:03.635: Unexpected error: <*errors.errorString | 0xc0003462c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:858 from junit_05.xml
{"msg":"PASSED [sig-cli] Kubectl client Simple pod should support inline execution and attach","total":-1,"completed":22,"skipped":194,"failed":0} [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 3 12:20:00.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename nettest STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should function for endpoint-Service: udp /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:248 STEP: Performing setup for networking test in namespace nettest-4525 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 3 12:20:02.032: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 3 12:20:03.278: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) [the identical Pending status line repeated roughly every 2s from 12:20:05 through 12:24:51, elided here] Jan 3 12:24:51.457: INFO: The status of Pod
netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:53.456: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:55.457: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:57.455: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:59.455: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:01.456: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:03.455: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:03.634: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:25:03.635: FAIL: Unexpected error: <*errors.errorString | 0xc0003462c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00437c1c0, {0x70da7f5, 0x9}, 0xc003e0c810) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:858 +0x1d4 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00437c1c0, 0xc003e0c810) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:760 +0x5b k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc00437c1c0, 0x34) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:775 +0x3b k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0014689a0, {0x0, 0x0, 0x6c}) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:129 +0x125
k8s.io/kubernetes/test/e2e/network.glob..func20.6.8()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:249 +0x36
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000350ea0, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-4525".
STEP: Found 38 events.
Jan 3 12:25:03.813: INFO: At 2023-01-03 12:20:02 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-4525/netserver-0 to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:03.813: INFO: At 2023-01-03 12:20:02 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-4525/netserver-1 to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:03.813: INFO: At 2023-01-03 12:20:02 +0000 UTC - event for netserver-2: {default-scheduler } Scheduled: Successfully assigned nettest-4525/netserver-2 to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:03.813: INFO: At 2023-01-03 12:20:03 +0000 UTC - event for netserver-3: {default-scheduler } Scheduled: Successfully assigned nettest-4525/netserver-3 to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:03.813: INFO: At 2023-01-03 12:20:02 +0000 UTC - event for netserver-0: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b489412f45d12c1259e8708bfb0fb04f497c319a5d93f25c8c1f39cf52ede6a1": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
(Equivalent FailedCreatePodSandBox / "No more IPs available" events, differing only in sandbox ID, follow for netserver-1, netserver-2, and netserver-3 at 12:20:03, then recur for netserver-0, netserver-1, and netserver-2 on their respective nodes roughly every 11-15s through 12:22:08, with the final event for each pod marked "(combined from similar events)"; duplicate entries elided.)
Jan 3 12:25:03.813: INFO: At 2023-01-03 12:20:16 +0000 UTC - event for netserver-3: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
Jan 3 12:25:03.813: INFO: At 2023-01-03 12:20:16 +0000 UTC - event for netserver-3: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Created: Created container webserver
Jan 3 12:25:03.813: INFO: At 2023-01-03 12:20:17 +0000 UTC - event for netserver-3: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Started: Started container webserver
Jan 3 12:25:03.991: INFO: POD          NODE                                              PHASE    GRACE  CONDITIONS
Jan 3 12:25:03.991: INFO: netserver-0  ip-172-20-33-54.ap-northeast-2.compute.internal   Pending         Initialized=True, Ready=False (ContainersNotReady: [webserver]), ContainersReady=False, PodScheduled=True (all since 12:20:02)
Jan 3 12:25:03.991: INFO: netserver-1  ip-172-20-37-52.ap-northeast-2.compute.internal   Pending         Initialized=True, Ready=False (ContainersNotReady: [webserver]), ContainersReady=False, PodScheduled=True (all since 12:20:02)
Jan 3 12:25:03.991: INFO: netserver-2  ip-172-20-48-181.ap-northeast-2.compute.internal  Pending         Initialized=True, Ready=False (ContainersNotReady: [webserver]), ContainersReady=False, PodScheduled=True (all since 12:20:02)
Jan 3 12:25:03.991: INFO: netserver-3  ip-172-20-52-37.ap-northeast-2.compute.internal   Running         Initialized=True (12:20:03), Ready=True (12:20:33), ContainersReady=True (12:20:33), PodScheduled=True (12:20:03)
Jan 3 12:25:03.991: INFO:
Jan 3 12:25:04.169: INFO: Unable to fetch nettest-4525/netserver-0/webserver logs: the
server rejected our request for an unknown reason (get pods netserver-0) Jan 3 12:25:04.347: INFO: Unable to fetch nettest-4525/netserver-1/webserver logs: the server rejected our request for an unknown reason (get pods netserver-1) Jan 3 12:25:04.525: INFO: Unable to fetch nettest-4525/netserver-2/webserver logs: the server rejected our request for an unknown reason (get pods netserver-2) Jan 3 12:25:04.882: INFO: Logging node info for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:25:05.059: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-33-54.ap-northeast-2.compute.internal feeb853a-f938-421c-a48c-593d753497df 21709 0 2023-01-03 12:09:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-33-54.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-33-54.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02f9cef67ede2f5b0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-03 12:17:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-03 12:17:46 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-02f9cef67ede2f5b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.33.54,},NodeAddress{Type:ExternalIP,Address:43.201.108.232,},NodeAddress{Type:Hostname,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-108-232.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b83476f0693e43ae0a06bf0db9bb4,SystemUUID:ec2b8347-6f06-93e4-3ae0-a06bf0db9bb4,BootID:2182b644-e5a3-4e7d-a07b-8b550578833d,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 
quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b 
k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe,DevicePath:,},},Config:nil,},}
Jan 3 12:25:05.060: INFO: Logging kubelet events for node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:05.240: INFO: Logging pods the kubelet thinks is on node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:05.602: INFO: busybox-113a8ec1-8769-4eed-b07b-3de46bcbab1a started at 2023-01-03 12:23:54 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container busybox ready: false, restart count 0
Jan 3 12:25:05.603: INFO: pod-subpath-test-secret-pk62 started at 2023-01-03 12:24:42 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container test-container-subpath-secret-pk62 ready: false, restart count 0
Jan 3 12:25:05.603: INFO: cilium-zp2v2 started at 2023-01-03 12:09:25 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:25:05.603: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:25:05.603: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:05.603: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:05.603: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:05.603: INFO: pode29dd966-b8d7-4a8b-959e-281433e89dbd started at 2023-01-03 12:22:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container nginx ready: false, restart count 0
Jan 3 12:25:05.603: INFO: pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67 started at 2023-01-03 12:18:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container write-pod ready: true, restart count 0
Jan 3 12:25:05.603: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:05.603: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:05.603: INFO: netserver-0 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:05.603: INFO: pod-ff0cde31-130b-41cc-82ed-a17554f19830 started at 2023-01-03 12:24:03 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container test-container ready: false, restart count 0
Jan 3 12:25:05.603: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:05.603: INFO: ebs-csi-node-lpbdv started at 2023-01-03 12:09:25 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:25:05.603: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:05.603: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:25:05.603: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:05.603: INFO: netserver-0 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:05.603: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4rndx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:05.603: INFO: sample-webhook-deployment-6c69dbd86b-2bgfs started at 2023-01-03 12:24:40 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container sample-webhook ready: false, restart count 0
Jan 3 12:25:05.603: INFO: webserver-deployment-5d9fdcc779-bn7jl started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:05.603: INFO: webserver-deployment-5d9fdcc779-nq7v8 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:05.603: INFO: webserver-deployment-5d9fdcc779-6b7jn started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:05.603: INFO: inline-volume-tester-nch9r started at 2023-01-03 12:18:41 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:25:05.603: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:05.603: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-m9mmt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:05.603: INFO: affinity-nodeport-timeout-225bm started at 2023-01-03 12:24:42 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container affinity-nodeport-timeout ready: false, restart count 0
Jan 3 12:25:05.603: INFO: netserver-0 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:05.603: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:06.208: INFO: Latency metrics for node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:06.208: INFO: Logging node info for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:06.386: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-52.ap-northeast-2.compute.internal c3f1ba3a-309d-47d4-9106-f4b4ecf80ce1 21697 0 2023-01-03 12:09:19 +0000 UTC <nil> <nil>
map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-52.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-37-52.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0188365058f7426fb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-03 12:19:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0188365058f7426fb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:58 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:58 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:58 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:24:58 +0000 UTC,LastTransitionTime:2023-01-03 12:09:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.52,},NodeAddress{Type:ExternalIP,Address:54.180.156.67,},NodeAddress{Type:Hostname,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-180-156-67.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2beb620baecd9cbda3e3db4fc66fe2,SystemUUID:ec2beb62-0bae-cd9c-bda3-e3db4fc66fe2,BootID:e8a83e85-3d73-4732-8b1e-2d93bbf7f6bc,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 
k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 
k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 3 12:25:06.386: INFO: Logging kubelet events for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:06.566: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:06.924: INFO: webserver-deployment-5d9fdcc779-htl5s started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:06.924: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:06.924: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:06.924: INFO: pod-subpath-test-preprovisionedpv-4ngh started at 2023-01-03 12:21:10 +0000 UTC (2+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Init container init-volume-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:25:06.924: INFO: Init container test-init-subpath-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:25:06.924: INFO: Container test-container-subpath-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:25:06.924: INFO: coredns-867df8f45c-js4mj started at 2023-01-03 12:10:00 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container coredns ready: true, restart count 0
Jan 3 12:25:06.924: INFO: netserver-1 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container webserver ready: true, restart count 0
Jan 3 12:25:06.924: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-w96g7 started at 2023-01-03 12:20:33 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:25:06.924: INFO: netserver-1 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:06.924: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-j8lbc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:06.924: INFO: netserver-1 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:06.924: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:06.924: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:06.924: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:06.924: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-pjqlp started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:06.924: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-5jxz7 started at 2023-01-03 12:20:49 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:25:06.924: INFO: webserver-deployment-5d9fdcc779-4tszx started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:06.924: INFO: cilium-v6smb started at 2023-01-03 12:09:20 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:25:06.924: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:25:06.924: INFO: pod-31fd5f45-887c-4042-895a-ab4f7b50670f started at 2023-01-03 12:20:38 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:25:06.924: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:06.924: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-w42zg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:06.924: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-l9xkv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:06.924: INFO: ebs-csi-node-fkxkq started at 2023-01-03 12:09:20 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:25:06.924: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:25:06.924: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:06.924: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:25:07.562: INFO: Latency metrics for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:07.562: INFO: Logging node info for node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:07.739: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-48-181.ap-northeast-2.compute.internal 96573984-3972-4958-a21d-91e5b7179ec3 21699 0 2023-01-03 12:09:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-48-181.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-48-181.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2507":"ip-172-20-48-181.ap-northeast-2.compute.internal","ebs.csi.aws.com":"i-0c02313085f6ea916"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-03 12:09:17 +0000 UTC
FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:17:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0c02313085f6ea916,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.48.181,},NodeAddress{Type:ExternalIP,Address:43.201.60.170,},NodeAddress{Type:Hostname,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-60-170.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2eb3762b49570c6e3d8607a5e516da,SystemUUID:ec2eb376-2b49-570c-6e3d-8607a5e516da,BootID:bd8a3c9d-ca15-400a-bb20-3a3e2aa04c7f,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 3 12:25:07.740: INFO: Logging kubelet events for node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:25:07.920: INFO: Logging pods the kubelet thinks is on node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:25:08.109: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:08.109: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:25:08.109: INFO: hostexec-ip-172-20-48-181.ap-northeast-2.compute.internal-wl8zb started at 2023-01-03 12:20:11 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:08.109: INFO: Container agnhost-container ready: true, restart count 0 Jan 3 12:25:08.109: INFO: webserver-deployment-5d9fdcc779-g9bm5 started at 2023-01-03 
12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:08.109: INFO: netserver-2 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:08.109: INFO: csi-hostpathplugin-0 started at 2023-01-03 12:17:59 +0000 UTC (0+7 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:25:08.109: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:25:08.109: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:25:08.109: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 3 12:25:08.109: INFO: Container hostpath ready: true, restart count 0
Jan 3 12:25:08.109: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:08.109: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:25:08.109: INFO: inline-volume-tester-sqtq6 started at 2023-01-03 12:17:59 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:25:08.109: INFO: affinity-nodeport-timeout-s5fsw started at 2023-01-03 12:24:42 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container affinity-nodeport-timeout ready: false, restart count 0
Jan 3 12:25:08.109: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:08.109: INFO: netserver-2 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:08.109: INFO: pod-87dc35fb-f86b-4b15-8720-1d77c6521c5b started at 2023-01-03 12:20:15 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:25:08.109: INFO: ebs-csi-node-5drk2 started at 2023-01-03 12:09:18 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:25:08.109: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:08.109: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:25:08.109: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:08.109: INFO: webserver-deployment-5d9fdcc779-f9rvm started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:08.109: INFO: inline-volume-tester2-6gs8h started at 2023-01-03 12:18:18 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 3 12:25:08.109: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:08.109: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:08.109: INFO: netserver-2 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:08.109: INFO: coredns-autoscaler-557ccb4c66-pj66n started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container autoscaler ready: true, restart count 0
Jan 3 12:25:08.109: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:08.109: INFO: exec-volume-test-preprovisionedpv-6fjc started at 2023-01-03 12:24:55 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container exec-container-preprovisionedpv-6fjc ready: false, restart count 0
Jan 3 12:25:08.109: INFO: hostexec-ip-172-20-48-181.ap-northeast-2.compute.internal-k9mr9 started at 2023-01-03 12:24:41 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:25:08.109: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:08.109: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-v2c2j started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:08.109: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-p2j42 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0
Jan 3 12:25:08.109: INFO: cilium-nsj92 started at 2023-01-03 12:09:18 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:25:08.109: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:25:08.109: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:25:08.109: INFO: coredns-867df8f45c-4fzzr started at
2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:08.109: INFO: Container coredns ready: true, restart count 0 Jan 3 12:25:08.109: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-czr66 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:08.109: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0 Jan 3 12:25:08.739: INFO: Latency metrics for node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:25:08.739: INFO: Logging node info for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:25:08.916: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-50-77.ap-northeast-2.compute.internal 8fb0fd08-c4e3-467d-a3ed-803fd4fc6cc5 20092 0 2023-01-03 12:07:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-50-77.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09592d5deddfe8924"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-03 12:07:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-03 12:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-03 12:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:08:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-03 12:09:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-09592d5deddfe8924,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3892264960 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3787407360 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:08:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.50.77,},NodeAddress{Type:ExternalIP,Address:15.165.77.221,},NodeAddress{Type:Hostname,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-15-165-77-221.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a153b3c5c2f63ca65051344522649,SystemUUID:ec2a153b-3c5c-2f63-ca65-051344522649,BootID:ef540d65-debc-49b7-93e8-d55cfe4956fd,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:136583630,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:126389044,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 
registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:54864177,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.1],SizeBytes:42982346,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.1],SizeBytes:42804933,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:26802430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.1],SizeBytes:4967349,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 3 12:25:08.916: INFO: Logging kubelet events for node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:25:09.095: INFO: Logging pods the kubelet thinks is on node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:25:09.279: INFO: etcd-manager-main-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:09.279: INFO: Container etcd-manager ready: true, restart count 0
Jan 3 12:25:09.279: INFO: cilium-operator-d84d55876-jlw9m started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:09.279: INFO: Container cilium-operator ready: true, restart count 1
Jan 3 12:25:09.279: INFO: ebs-csi-controller-74ccd5888c-qh2jn started at 2023-01-03 12:07:55 +0000 UTC (0+5 container statuses recorded)
Jan 3 12:25:09.279: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:25:09.279: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:25:09.279: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:25:09.279: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:25:09.279: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:09.279: INFO: dns-controller-867784b75c-fs862 started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:09.279: INFO: Container dns-controller ready: true, restart count 0
Jan 3 12:25:09.279: INFO: kube-controller-manager-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:09.279: INFO: Container kube-controller-manager ready: true, restart count 2
Jan 3 12:25:09.279: INFO: kube-scheduler-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:09.279: INFO: Container kube-scheduler ready: true, restart count 0
Jan 3 12:25:09.279: INFO: etcd-manager-events-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:09.279: INFO: Container etcd-manager ready: true, restart count 0
Jan 3 12:25:09.279: INFO: kops-controller-54wzq started at 2023-01-03 12:07:54 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:09.279: INFO: Container kops-controller ready: true, restart count 0
Jan 3 12:25:09.279: INFO: kube-apiserver-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+2 container statuses recorded)
Jan 3 12:25:09.279: INFO: Container healthcheck ready: true, restart count 0
Jan 3 12:25:09.279: INFO: Container kube-apiserver ready: true, restart count 1
Jan 3 12:25:09.279: INFO: ebs-csi-node-5hfnh started at 2023-01-03 12:07:53 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:25:09.279: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:25:09.279: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:09.279: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:25:09.279: INFO: cilium-jrcck started at 2023-01-03 12:07:53 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:25:09.279: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:25:09.279: INFO: Container cilium-agent ready: true, restart count 1
Jan 3 12:25:09.858: INFO: Latency metrics for node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:25:09.858: INFO: Logging node info for node ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:10.036: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-52-37.ap-northeast-2.compute.internal aad66781-b33c-418b-b9d9-bf279890bb2f 21702 0 2023-01-03 12:09:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux
failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-52-37.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-060b35b9149f1ba66"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-060b35b9149f1ba66,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:24:47 +0000 UTC,LastTransitionTime:2023-01-03 12:09:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.52.37,},NodeAddress{Type:ExternalIP,Address:3.38.101.72,},NodeAddress{Type:Hostname,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-38-101-72.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2e82df6bf35f54d2aca0b4e9167917,SystemUUID:ec2e82df-6bf3-5f54-d2ac-a0b4e9167917,BootID:53771566-a9e2-4e41-a8e2-f140d3f619b9,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e 
k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32 kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83 kubernetes.io/csi/ebs.csi.aws.com^vol-0f447687f1c061b01],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0f447687f1c061b01,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32,DevicePath:,},},Config:nil,},} Jan 3 12:25:10.036: INFO: Logging kubelet events for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:25:10.215: INFO: Logging pods the kubelet thinks is on node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:25:10.407: INFO: cilium-8n4td started at 2023-01-03 12:09:19 +0000 UTC (1+1 container statuses recorded) Jan 3 12:25:10.407: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:25:10.407: INFO: Container cilium-agent ready: true, restart count 0 Jan 3 12:25:10.407: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgpzl 
started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:10.407: INFO: webserver-deployment-5d9fdcc779-tws27 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:10.407: INFO: ebs-csi-node-qg9wn started at 2023-01-03 12:09:19 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:25:10.407: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:25:10.407: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:25:10.407: INFO: inline-volume-tester2-xcxjr started at 2023-01-03 12:24:46 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 3 12:25:10.407: INFO: webserver-deployment-5d9fdcc779-nwlm7 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:10.407: INFO: pod1 started at 2023-01-03 12:24:07 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container agnhost-container ready: false, restart count 0
Jan 3 12:25:10.407: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:10.407: INFO: inline-volume-tester-mmvnp started at 2023-01-03 12:20:40 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 3 12:25:10.407: INFO: webserver-deployment-5d9fdcc779-zvnv5 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container httpd ready: false, restart count 0
Jan 3 12:25:10.407: INFO: pod-e8e32ffc-f03a-43d1-8ea3-72be400f27cc started at 2023-01-03 12:24:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container test-container ready: false, restart count 0
Jan 3 12:25:10.407: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:10.407: INFO: test-ss-0 started at 2023-01-03 12:23:08 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:10.407: INFO: affinity-nodeport-timeout-6l7c7 started at 2023-01-03 12:24:42 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container affinity-nodeport-timeout ready: false, restart count 0
Jan 3 12:25:10.407: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2kwc4 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:10.407: INFO: netserver-3 started at 2023-01-03 12:24:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container webserver ready: false, restart count 0
Jan 3 12:25:10.407: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:10.407: INFO: sample-apiserver-deployment-7cdc9f5bf7-tmj6p started at 2023-01-03 12:23:11 +0000 UTC (0+2 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container etcd ready: false, restart count 0
Jan 3 12:25:10.407: INFO: Container sample-apiserver ready: false, restart count 0
Jan 3 12:25:10.407: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:10.407: INFO: hostpath-symlink-prep-volume-6653 started at 2023-01-03 12:24:05 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container init-volume-volume-6653 ready: false, restart count 0
Jan 3 12:25:10.407: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:10.407: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:10.407: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:10.407: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-k28rp started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.407: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:25:10.408: INFO: netserver-3 started at 2023-01-03 12:20:03 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.408: INFO: Container webserver ready: true, restart count 0
Jan 3 12:25:10.408: INFO: netserver-3 started at 2023-01-03 12:20:13 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:25:10.408: INFO: Container webserver ready: true, restart count 0
Jan 3 12:25:10.408: INFO:
sample-crd-conversion-webhook-deployment-67c86bcf4b-bkg7c started at 2023-01-03 12:23:19 +0000 UTC (0+1 container statuses recorded) Jan 3 12:25:10.408: INFO: Container sample-crd-conversion-webhook ready: false, restart count 0 Jan 3 12:25:10.998: INFO: Latency metrics for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:25:10.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "nettest-4525" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 Jan 3 12:24:04.306: Unexpected error: <*errors.errorString | 0xc0002482c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 from junit_13.xml
{"msg":"PASSED [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified","total":-1,"completed":10,"skipped":64,"failed":0} [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 3 12:19:02.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 STEP: create the container to handle the HTTPGet hook request. Jan 3 12:19:03.949: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:19:06.128: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:19:08.128: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:19:10.128: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:19:12.127: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:19:14.128: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:19:16.127: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:19:18.128: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:19:20.128: INFO: The
status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) [... identical INFO poll message repeated every ~2s from 12:19:22 through 12:23:56; repeats elided ...] Jan 3 12:23:58.129: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it
to be Running (with Ready = true) Jan 3 12:24:00.128: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:02.127: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:04.128: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:04.305: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 3 12:24:04.306: FAIL: Unexpected error: <*errors.errorString | 0xc0002482c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc004a05998, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 +0x94 k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:63 +0x3cb k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000235520, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "container-lifecycle-hook-3696". STEP: Found 11 events.
Jan 3 12:24:04.485: INFO: At 2023-01-03 12:19:03 +0000 UTC - event for pod-handle-http-request: {default-scheduler } Scheduled: Successfully assigned container-lifecycle-hook-3696/pod-handle-http-request to ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:24:04.485: INFO: At 2023-01-03 12:19:04 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d5d4a7eddfdd26002ccbb08a58dd83dba511fff6cb88f153625909cbc5b50eb6": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:04.485: INFO: At 2023-01-03 12:19:17 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a981a71628ae9945c4a3f4ac8a20215f4defaf2e6e5dccdae1885c4bb2d7b49e": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:04.485: INFO: At 2023-01-03 12:19:31 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0a9dbd9855c74ec0374732300ba4363d343527e7d3255c945f96dbd49a773ea1": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:04.485: INFO: At 2023-01-03 12:19:46 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = 
failed to setup network for sandbox "6dc8247cd2e0f3cb28aa70b771d9d8e43142cc27a12b68d7a4cdadee5cab851c": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:04.485: INFO: At 2023-01-03 12:19:57 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "72603a30aa43234c60759cf5e58e892056768ea3344a0ce81d58258f4c8910ed": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:04.485: INFO: At 2023-01-03 12:20:10 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e721de2c3222028afbc5175288be7c70b853a81bdff07eb9250f9580d9b07449": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:04.485: INFO: At 2023-01-03 12:20:21 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "53722062942e754b998763618dc1029d3404a97679715bc5866b8c2b1cc2543f": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:04.485: INFO: At 2023-01-03 12:20:33 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = 
failed to setup network for sandbox "c6f60e18ddeb1fcd77ef0b17d80a30927681e6d35b5de46ca609173d9951ab03": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:04.485: INFO: At 2023-01-03 12:20:48 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b4c492090a9fb1d059c40ac2215549639b9b615ef21997d796cc957ab1b24685": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:04.485: INFO: At 2023-01-03 12:21:01 +0000 UTC - event for pod-handle-http-request: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "56a595694259755569875dcc2daddf4b9e2a347db978079cd7100b2c1d008579": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:04.663: INFO: POD NODE PHASE GRACE CONDITIONS Jan 3 12:24:04.663: INFO: pod-handle-http-request ip-172-20-33-54.ap-northeast-2.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:19:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:19:03 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:19:03 +0000 UTC ContainersNotReady containers with unready status: [agnhost-container]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-03 12:19:03 +0000 UTC }] Jan 3 12:24:04.663: INFO: Jan 3 12:24:04.845: INFO: Unable to fetch 
container-lifecycle-hook-3696/pod-handle-http-request/agnhost-container logs: the server rejected our request for an unknown reason (get pods pod-handle-http-request)
Jan 3 12:24:05.024: INFO: Logging node info for node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:24:05.202: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-33-54.ap-northeast-2.compute.internal feeb853a-f938-421c-a48c-593d753497df 20908 0 2023-01-03 12:09:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-33-54.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-33-54.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02f9cef67ede2f5b0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-03 12:17:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-03 12:17:46 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-02f9cef67ede2f5b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: 
{{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:23:21 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:23:21 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:23:21 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:23:21 +0000 UTC,LastTransitionTime:2023-01-03 12:09:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.33.54,},NodeAddress{Type:ExternalIP,Address:43.201.108.232,},NodeAddress{Type:Hostname,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-108-232.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b83476f0693e43ae0a06bf0db9bb4,SystemUUID:ec2b8347-6f06-93e4-3ae0-a06bf0db9bb4,BootID:2182b644-e5a3-4e7d-a07b-8b550578833d,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a 
registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e kubernetes.io/csi/ebs.csi.aws.com^vol-00f36f534313dc59d 
kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-00f36f534313dc59d,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e,DevicePath:,},},Config:nil,},}
Jan 3 12:24:05.202: INFO: Logging kubelet events for node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:24:05.382: INFO: Logging pods the kubelet thinks is on node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:24:05.570: INFO: ebs-csi-node-lpbdv started at 2023-01-03 12:09:25 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:24:05.570: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:24:05.570: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:24:05.570: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:05.570: INFO: suspend-false-to-true-ssg9g started at 2023-01-03 12:19:04 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container c ready: true, restart count 0
Jan 3 12:24:05.570: INFO: hostexec-ip-172-20-33-54.ap-northeast-2.compute.internal-wgxlb started at 2023-01-03 12:19:06 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:24:05.570: INFO: pod-subpath-test-preprovisionedpv-rdzh started at 2023-01-03 12:19:26 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Init container init-volume-preprovisionedpv-rdzh ready: false, restart count 0
Jan 3 12:24:05.570: INFO: Container test-container-subpath-preprovisionedpv-rdzh ready: false, restart count 0
Jan 3 12:24:05.570: INFO: netserver-0 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:05.570: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4rndx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:05.570: INFO: webserver-deployment-5d9fdcc779-bn7jl started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:05.570: INFO: webserver-deployment-5d9fdcc779-nq7v8 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:05.570: INFO: webserver-deployment-5d9fdcc779-6b7jn started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:05.570: INFO: inline-volume-tester-nch9r started at 2023-01-03 12:18:41 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 3 12:24:05.570: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:05.570: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-m9mmt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:05.570: INFO: pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521 started at 2023-01-03 12:19:30 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container env-test ready: false, restart count 0
Jan 3 12:24:05.570: INFO: netserver-0 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:05.570: INFO: busybox-113a8ec1-8769-4eed-b07b-3de46bcbab1a started at 2023-01-03 12:23:54 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container busybox ready: false, restart count 0
Jan 3 12:24:05.570: INFO: cilium-zp2v2 started at 2023-01-03 12:09:25 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:24:05.570: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:24:05.570: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:05.570: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:05.570: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:05.570: INFO: pode29dd966-b8d7-4a8b-959e-281433e89dbd started at 2023-01-03 12:22:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container nginx ready: false, restart count 0
Jan 3 12:24:05.570: INFO: pod-handle-http-request started at 2023-01-03 12:19:03 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container agnhost-container ready: false, restart count 0
Jan 3 12:24:05.570: INFO: pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67 started at 2023-01-03 12:18:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:24:05.570: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:05.570: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:05.570: INFO: suspend-false-to-true-whr7v started at 2023-01-03 12:19:04 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container c ready: true, restart count 0
Jan 3 12:24:05.570: INFO: netserver-0 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:05.570: INFO: pod-ff0cde31-130b-41cc-82ed-a17554f19830 started at 2023-01-03 12:24:03 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container test-container ready: false, restart count 0
Jan 3 12:24:05.570: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:05.570: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:06.217: INFO: Latency metrics for node ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:24:06.217: INFO: Logging node info for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:24:06.395: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-52.ap-northeast-2.compute.internal c3f1ba3a-309d-47d4-9106-f4b4ecf80ce1 19624 0 2023-01-03 12:09:19
+0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-52.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-37-52.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0188365058f7426fb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-03 12:19:52 
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0188365058f7426fb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has 
sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.52,},NodeAddress{Type:ExternalIP,Address:54.180.156.67,},NodeAddress{Type:Hostname,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-180-156-67.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2beb620baecd9cbda3e3db4fc66fe2,SystemUUID:ec2beb62-0bae-cd9c-bda3-e3db4fc66fe2,BootID:e8a83e85-3d73-4732-8b1e-2d93bbf7f6bc,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 
k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 
k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 3 12:24:06.396: INFO: Logging kubelet events for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:24:06.576: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:24:06.765: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.765: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:06.765: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-pjqlp started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.765: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:06.765: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-5jxz7 started at 2023-01-03 12:20:49 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.765: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:24:06.765: INFO: webserver-deployment-5d9fdcc779-4tszx started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.765: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:06.765: INFO: cilium-v6smb started at 2023-01-03 12:09:20 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:24:06.765: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:24:06.765: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:24:06.765: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.765: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:06.765: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:06.766: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-w42zg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:06.766: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-l9xkv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:06.766: INFO: ebs-csi-node-fkxkq started at 2023-01-03 12:09:20 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:24:06.766: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:24:06.766: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:24:06.766: INFO: pod-31fd5f45-887c-4042-895a-ab4f7b50670f started at 2023-01-03 12:20:38 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:24:06.766: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:06.766: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:06.766: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:06.766: INFO: pod-subpath-test-preprovisionedpv-4ngh started at 2023-01-03 12:21:10 +0000 UTC (2+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Init container init-volume-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:24:06.766: INFO: Init container test-init-subpath-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:24:06.766: INFO: Container test-container-subpath-preprovisionedpv-4ngh ready: false, restart count 0
Jan 3 12:24:06.766: INFO: coredns-867df8f45c-js4mj started at 2023-01-03 12:10:00 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container coredns ready: true, restart count 0
Jan 3 12:24:06.766: INFO: webserver-deployment-5d9fdcc779-htl5s started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:06.766: INFO: hostpath-symlink-prep-provisioning-1614 started at 2023-01-03 12:18:57 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container init-volume-provisioning-1614 ready: false, restart count 0
Jan 3 12:24:06.766: INFO: netserver-1 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:06.766: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-j8lbc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:06.766: INFO: netserver-1 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:06.766: INFO: netserver-1 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container webserver ready: true, restart count 0
Jan 3 12:24:06.766: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-w96g7 started at 2023-01-03 12:20:33 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:06.766: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:24:07.373: INFO: Latency metrics for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:24:07.373: INFO: Logging node info for node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:24:07.551: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-48-181.ap-northeast-2.compute.internal 96573984-3972-4958-a21d-91e5b7179ec3 20641 0 2023-01-03 12:09:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-48-181.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-48-181.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2507":"ip-172-20-48-181.ap-northeast-2.compute.internal","ebs.csi.aws.com":"i-0c02313085f6ea916"} node.alpha.kubernetes.io/ttl:0
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:17:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {e2e.test Update v1 2023-01-03 12:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0c02313085f6ea916,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.48.181,},NodeAddress{Type:ExternalIP,Address:43.201.60.170,},NodeAddress{Type:Hostname,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-60-170.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2eb3762b49570c6e3d8607a5e516da,SystemUUID:ec2eb376-2b49-570c-6e3d-8607a5e516da,BootID:bd8a3c9d-ca15-400a-bb20-3a3e2aa04c7f,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 3 12:24:07.551: INFO: Logging kubelet events for node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:24:07.731: INFO: Logging pods the kubelet thinks is on node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:24:07.917: INFO: csi-hostpathplugin-0 started at 2023-01-03 12:17:59 +0000 UTC (0+7 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:24:07.917: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:24:07.917: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:24:07.917: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 3 12:24:07.917: INFO: Container hostpath ready: true, restart count 0
Jan 3 12:24:07.917: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:24:07.917: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:24:07.917: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:07.917: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-v2c2j started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:07.917: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-p2j42 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0
Jan 3 12:24:07.917: INFO: inline-volume-tester-sqtq6 started at 2023-01-03 12:17:59 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:24:07.917: INFO: pod-87dc35fb-f86b-4b15-8720-1d77c6521c5b started at 2023-01-03 12:20:15 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:24:07.917: INFO: cilium-nsj92 started at 2023-01-03 12:09:18 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:24:07.917: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:24:07.917: INFO: coredns-867df8f45c-4fzzr started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container coredns ready: true, restart count 0
Jan 3 12:24:07.917: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:07.917: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-czr66 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0
Jan 3 12:24:07.917: INFO: netserver-2 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:07.917: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:07.917: INFO: netserver-2 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:07.917: INFO: ebs-csi-node-5drk2 started at 2023-01-03 12:09:18 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:24:07.917: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:24:07.917: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:24:07.917: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:07.917: INFO: webserver-deployment-5d9fdcc779-f9rvm started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:07.917: INFO: inline-volume-tester2-6gs8h started at 2023-01-03 12:18:18 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 3 12:24:07.917: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:07.917: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:07.917: INFO: coredns-autoscaler-557ccb4c66-pj66n started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container autoscaler ready: true, restart count 0
Jan 3 12:24:07.917: INFO: hostexec-ip-172-20-48-181.ap-northeast-2.compute.internal-wl8zb started at 2023-01-03 12:20:11 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:24:07.917: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:07.917: INFO: webserver-deployment-5d9fdcc779-g9bm5 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:07.917: INFO: netserver-2 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:07.917: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:08.582: INFO: Latency metrics for node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:24:08.582: INFO: Logging node info for node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:24:08.760: INFO: Node Info:
&Node{ObjectMeta:{ip-172-20-50-77.ap-northeast-2.compute.internal 8fb0fd08-c4e3-467d-a3ed-803fd4fc6cc5 20092 0 2023-01-03 12:07:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-50-77.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09592d5deddfe8924"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-03 12:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-03 12:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-03 12:07:56 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:08:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-03 12:09:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-09592d5deddfe8924,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3892264960 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3787407360 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:08:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.50.77,},NodeAddress{Type:ExternalIP,Address:15.165.77.221,},NodeAddress{Type:Hostname,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-15-165-77-221.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a153b3c5c2f63ca65051344522649,SystemUUID:ec2a153b-3c5c-2f63-ca65-051344522649,BootID:ef540d65-debc-49b7-93e8-d55cfe4956fd,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 
registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:136583630,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:126389044,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:54864177,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.1],SizeBytes:42982346,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.1],SizeBytes:42804933,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:26802430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.1],SizeBytes:4967349,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 3 12:24:08.761: INFO: Logging kubelet events for node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:24:08.944: INFO: Logging pods the kubelet thinks is on node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:24:09.127: INFO: ebs-csi-controller-74ccd5888c-qh2jn started at 2023-01-03 12:07:55 +0000 UTC (0+5 container statuses recorded)
Jan 3 12:24:09.128: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:24:09.128: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:24:09.128: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:24:09.128: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:24:09.128: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:24:09.128: INFO: dns-controller-867784b75c-fs862 started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:09.128: INFO: Container dns-controller ready: true, restart count 0
Jan 3 12:24:09.128: INFO: kube-controller-manager-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:09.128: INFO: Container kube-controller-manager ready: true, restart count 2
Jan 3 12:24:09.128: INFO: kube-scheduler-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:09.128: INFO: Container kube-scheduler ready: true, restart count 0
Jan 3 12:24:09.128: INFO: etcd-manager-events-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:09.128: INFO: Container etcd-manager ready: true, restart count 0
Jan 3 12:24:09.128: INFO: etcd-manager-main-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:09.128: INFO: Container etcd-manager ready: true, restart count 0
Jan 3 12:24:09.128: INFO: cilium-operator-d84d55876-jlw9m started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:09.128: INFO: Container cilium-operator ready: true, restart count 1
Jan 3 12:24:09.128: INFO: kube-apiserver-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+2 container statuses recorded)
Jan 3 12:24:09.128: INFO: Container healthcheck ready: true, restart count 0
Jan 3 12:24:09.128: INFO: Container kube-apiserver ready: true, restart count 1
Jan 3 12:24:09.128: INFO: ebs-csi-node-5hfnh started at 2023-01-03 12:07:53 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:24:09.128: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:24:09.128: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:24:09.128: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:24:09.128: INFO: cilium-jrcck started at 2023-01-03 12:07:53 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:24:09.128: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:24:09.128: INFO: Container cilium-agent ready: true, restart count 1
Jan 3 12:24:09.128: INFO: kops-controller-54wzq started at 2023-01-03 12:07:54 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:09.128: INFO: Container kops-controller ready: true, restart count 0
Jan 3 12:24:09.719: INFO:
Latency metrics for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:24:09.719: INFO: Logging node info for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:24:09.897: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-52-37.ap-northeast-2.compute.internal aad66781-b33c-418b-b9d9-bf279890bb2f 20086 0 2023-01-03 12:09:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-52-37.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-060b35b9149f1ba66"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-060b35b9149f1ba66,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.52.37,},NodeAddress{Type:ExternalIP,Address:3.38.101.72,},NodeAddress{Type:Hostname,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-38-101-72.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2e82df6bf35f54d2aca0b4e9167917,SystemUUID:ec2e82df-6bf3-5f54-d2ac-a0b4e9167917,BootID:53771566-a9e2-4e41-a8e2-f140d3f619b9,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 
k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32 kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83,DevicePath:,},},Config:nil,},} Jan 3 12:24:09.898: INFO: Logging kubelet events for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:24:10.081: INFO: Logging pods the kubelet thinks is on node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:24:10.273: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgpzl started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:10.273: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:10.275: INFO: webserver-deployment-5d9fdcc779-tws27 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:10.275: INFO: Container httpd ready: false, restart count 0 Jan 3 12:24:10.275: INFO: ebs-csi-node-qg9wn started at 2023-01-03 12:09:19 +0000 UTC (0+3 container statuses recorded) Jan 3 12:24:10.275: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:24:10.275: INFO: 
Container liveness-probe ready: true, restart count 0
Jan 3 12:24:10.275: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:24:10.275: INFO: cilium-8n4td started at 2023-01-03 12:09:19 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:24:10.275: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:24:10.275: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:24:10.275: INFO: webserver-deployment-5d9fdcc779-nwlm7 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.275: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:10.275: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.275: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:10.275: INFO: pod1 started at 2023-01-03 12:24:07 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container agnhost-container ready: false, restart count 0
Jan 3 12:24:10.276: INFO: webserver-deployment-5d9fdcc779-zvnv5 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:10.276: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:10.276: INFO: inline-volume-tester-mmvnp started at 2023-01-03 12:20:40 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 3 12:24:10.276: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2kwc4 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:10.276: INFO: test-ss-0 started at 2023-01-03 12:23:08 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:10.276: INFO: netserver-3 started at 2023-01-03 12:24:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:10.276: INFO: sample-apiserver-deployment-7cdc9f5bf7-tmj6p started at 2023-01-03 12:23:11 +0000 UTC (0+2 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container etcd ready: false, restart count 0
Jan 3 12:24:10.276: INFO: Container sample-apiserver ready: false, restart count 0
Jan 3 12:24:10.276: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:10.276: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:10.276: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:10.276: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:10.276: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:10.276: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-k28rp started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:10.276: INFO: netserver-3 started at 2023-01-03 12:20:03 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container webserver ready: true, restart count 0
Jan 3 12:24:10.276: INFO: netserver-3 started at 2023-01-03 12:20:13 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container webserver ready: true, restart count 0
Jan 3 12:24:10.276: INFO: sample-crd-conversion-webhook-deployment-67c86bcf4b-bkg7c started at 2023-01-03 12:23:19 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container sample-crd-conversion-webhook ready: false, restart count 0
Jan 3 12:24:10.276: INFO: hostpath-symlink-prep-volume-6653 started at 2023-01-03 12:24:05 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:10.276: INFO: Container init-volume-volume-6653 ready: false, restart count 0
Jan 3 12:24:10.871: INFO: Latency metrics for node ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:24:10.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3696" for this suite.
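The kubelet pod-status dump above is easiest to triage by pulling out the containers that are not ready. A minimal offline sketch, assuming log lines in the `Container <name> ready: <bool>` format shown above (the `LOG` sample here is a hypothetical excerpt, not taken verbatim from this run):

```python
import re

# Hypothetical sample lines in the format of the kubelet pod-status dump above.
LOG = """\
Jan 3 12:24:10.275: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:10.275: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:24:10.276: INFO: Container webserver ready: false, restart count 0
"""

READY_RE = re.compile(r"Container (\S+) ready: (true|false)")

def not_ready_containers(log: str) -> list[str]:
    """Return container names whose readiness is reported as false."""
    return [m.group(1) for m in READY_RE.finditer(log) if m.group(2) == "false"]

print(not_ready_containers(LOG))  # ['httpd', 'webserver']
```

Run against the full dump, this quickly separates the stuck test pods (httpd, webserver, csi-volume-tester) from the healthy system pods.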
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sSecrets\sshould\sbe\sconsumable\svia\sthe\senvironment\s\[NodeConformance\]\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 3 12:24:33.940: Unexpected error: <*errors.errorString | 0xc003926f10>: { s: "expected pod \"pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521\" success: Gave up after waiting 5m0s for pod \"pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521\" to be \"Succeeded or Failed\"", }
expected pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521" success: Gave up after waiting 5m0s for pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521" to be "Succeeded or Failed" occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:770
from junit_15.xml
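The wait loop that follows polls the pod roughly every two seconds and logs an identical `Phase="Pending"` line each time, so the transcript is dominated by repetition. A minimal sketch for condensing such a transcript offline (the `POLLS` sample is a hypothetical excerpt in the same format, not the full run):

```python
import re

# Hypothetical excerpt in the format of the pod wait loop below.
POLLS = """\
Jan 3 12:19:30.673: INFO: Pod "pod-configmaps": Phase="Pending", Reason="", readiness=false. Elapsed: 177.573516ms
Jan 3 12:19:32.852: INFO: Pod "pod-configmaps": Phase="Pending", Reason="", readiness=false. Elapsed: 2.356252189s
Jan 3 12:24:02.923: INFO: Pod "pod-configmaps": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.427765762s
"""

# Capture the reported phase and the elapsed time from each poll line.
ENTRY_RE = re.compile(r'Phase="(\w+)".*Elapsed: (\S+)')

def summarize(log: str) -> str:
    """Collapse repeated poll lines into one summary line."""
    entries = ENTRY_RE.findall(log)
    phases = {phase for phase, _ in entries}
    return f"{len(entries)} polls, phases={sorted(phases)}, last elapsed={entries[-1][1]}"

print(summarize(POLLS))  # 3 polls, phases=['Pending'], last elapsed=4m32.427765762s
```

A summary like this makes the failure mode obvious at a glance: the pod never left Pending for the entire 5m0s timeout.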
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":90,"failed":0}
[BeforeEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 3 12:19:28.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating secret secrets-6292/secret-test-cfca615a-ebe7-4163-bdf8-ba650af716b7
STEP: Creating a pod to test consume secrets
Jan 3 12:19:30.495: INFO: Waiting up to 5m0s for pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521" in namespace "secrets-6292" to be "Succeeded or Failed"
Jan 3 12:19:30.673: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 177.573516ms
Jan 3 12:19:32.852: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 2.356252189s
[... identical Pending poll lines, repeated roughly every 2s from 12:19:35 through 12:24:00, elided ...]
Jan 3 12:24:02.923: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4m32.427765762s
Jan 3 12:24:05.100: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false.
Elapsed: 4m34.604479757s Jan 3 12:24:07.277: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4m36.78145988s Jan 3 12:24:09.454: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4m38.959023313s Jan 3 12:24:11.632: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4m41.136526635s Jan 3 12:24:13.808: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4m43.313212249s Jan 3 12:24:15.985: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4m45.490116706s Jan 3 12:24:18.163: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4m47.66730775s Jan 3 12:24:20.340: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.844382853s Jan 3 12:24:22.518: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4m52.02256331s Jan 3 12:24:24.695: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4m54.199589402s Jan 3 12:24:26.873: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. Elapsed: 4m56.377566742s Jan 3 12:24:29.051: INFO: Pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4m58.555339291s Jan 3 12:24:31.406: INFO: Failed to get logs from node "ip-172-20-33-54.ap-northeast-2.compute.internal" pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521" container "env-test": the server rejected our request for an unknown reason (get pods pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521) STEP: delete the pod Jan 3 12:24:31.585: INFO: Waiting for pod pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521 to disappear Jan 3 12:24:31.762: INFO: Pod pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521 still exists Jan 3 12:24:33.763: INFO: Waiting for pod pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521 to disappear Jan 3 12:24:33.940: INFO: Pod pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521 no longer exists Jan 3 12:24:33.940: FAIL: Unexpected error: <*errors.errorString | 0xc003926f10>: { s: "expected pod \"pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521\" success: Gave up after waiting 5m0s for pod \"pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521\" to be \"Succeeded or Failed\"", } expected pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521" success: Gave up after waiting 5m0s for pod "pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521" to be "Succeeded or Failed" occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0x70d569e, {0x70f1a98, 0xc002d23138}, 0xc002d9f400, 0x0, {0xc002d23168, 0x6, 0x6}, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:770 +0x176 k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...) 
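The 5m0s "Succeeded or Failed" timeout above is only the symptom; the events collected for this pod trace it to CNI IP allocation failures on the node. A hypothetical triage step over a saved junit log is to tally how many sandbox-creation failures carry the cilium "No more IPs available" error versus other causes (the heredoc below holds abbreviated stand-in lines, not the real log format):

```shell
# Count cilium IPAM-exhaustion sandbox failures in a captured log snippet.
# Stand-in lines only; real event lines are much longer.
log=$(cat <<'EOF'
12:19:30 FailedCreatePodSandBox: ... postIpamFailure No more IPs available
12:19:45 FailedCreatePodSandBox: ... postIpamFailure No more IPs available
12:19:56 FailedCreatePodSandBox: ... failed to pull image
EOF
)
ipam_failures=$(printf '%s\n' "$log" | grep -c 'No more IPs available')
sandbox_failures=$(printf '%s\n' "$log" | grep -c 'FailedCreatePodSandBox')
echo "$ipam_failures of $sandbox_failures sandbox failures were IPAM exhaustion"
```

On a live cluster the same signal would come from the pod's events and cilium's own status output; the exact commands depend on the install, so this is a sketch of the pattern, not a prescribed procedure.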
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:567 k8s.io/kubernetes/test/e2e/common/node.glob..func19.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/secrets.go:126 +0x9cc k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0004b1a00, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "secrets-6292". STEP: Found 11 events. Jan 3 12:24:34.118: INFO: At 2023-01-03 12:19:30 +0000 UTC - event for pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521: {default-scheduler } Scheduled: Successfully assigned secrets-6292/pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521 to ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:24:34.118: INFO: At 2023-01-03 12:19:30 +0000 UTC - event for pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "65d20a1904999ddce55862f9cebb7cf3d91884fff4ebd9142d50a8d5b7e66435": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:34.118: INFO: At 2023-01-03 12:19:45 +0000 UTC - event for pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc 
error: code = Unknown desc = failed to setup network for sandbox "c63514c9c55e59bbdfc56ad024191d7882bd63ce62743762972e2cf6076c7369": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available [... 7 further identical FailedCreatePodSandBox events from {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} at 12:19:56, 12:20:10, 12:20:22, 12:20:33, 12:20:46, 12:20:59, and 12:21:13, each with the same cilium-cni postIpamFailure "No more IPs available" error; only the sandbox IDs differ ...] Jan 3 
12:24:34.118: INFO: At 2023-01-03 12:21:28 +0000 UTC - event for pod-configmaps-fad53cb0-ed00-453d-94fc-29aba9c7f521: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "360cc799e62ddb8af0e2adaf7f13c13a635266a1b24458eff64e1dbb416f1156": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:24:34.294: INFO: POD NODE PHASE GRACE CONDITIONS Jan 3 12:24:34.294: INFO: Jan 3 12:24:34.471: INFO: Logging node info for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:24:34.648: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-33-54.ap-northeast-2.compute.internal feeb853a-f938-421c-a48c-593d753497df 21341 0 2023-01-03 12:09:24 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-33-54.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-33-54.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-02f9cef67ede2f5b0"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:24 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-03 12:17:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2023-01-03 12:17:46 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-02f9cef67ede2f5b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:24:12 +0000 UTC,LastTransitionTime:2023-01-03 12:09:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
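As a sanity check on the "No more IPs available" error: the node above advertises podCIDR 100.96.4.0/24 and a pods capacity of 110. Assuming cilium hands out pod IPs from that per-node /24 (it may be configured with a different IPAM mode; this is a back-of-envelope sketch, not a statement about this cluster's config), the pool is far larger than the pod capacity, which points at leaked or unreleased allocations rather than an undersized CIDR:

```shell
# Usable host addresses in a /24 pod CIDR, minus network and broadcast.
# (Assumption: cilium allocates pod IPs from the node's podCIDR.)
prefix=24
usable=$(( (1 << (32 - prefix)) - 2 ))
pods_capacity=110   # from the node's Capacity in the dump above
echo "usable=$usable pods_capacity=$pods_capacity"   # prints usable=254 pods_capacity=110
```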
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.33.54,},NodeAddress{Type:ExternalIP,Address:43.201.108.232,},NodeAddress{Type:Hostname,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-33-54.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-108-232.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2b83476f0693e43ae0a06bf0db9bb4,SystemUUID:ec2b8347-6f06-93e4-3ae0-a06bf0db9bb4,BootID:2182b644-e5a3-4e7d-a07b-8b550578833d,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a 
registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 
registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e 
kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-001776acebc5c320e,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0dc0fec6852d19bbe,DevicePath:,},},Config:nil,},} Jan 3 12:24:34.648: INFO: Logging kubelet events for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:24:34.827: INFO: Logging pods the kubelet thinks is on node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:24:35.011: INFO: suspend-false-to-true-ssg9g started at 2023-01-03 12:19:04 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container c ready: true, restart count 0 Jan 3 12:24:35.011: INFO: netserver-0 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container webserver ready: false, restart count 0 Jan 3 12:24:35.011: INFO: ebs-csi-node-lpbdv started at 2023-01-03 12:09:25 +0000 UTC (0+3 container statuses recorded) Jan 3 12:24:35.011: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:24:35.011: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:24:35.011: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 3 12:24:35.011: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:35.011: INFO: webserver-deployment-5d9fdcc779-nq7v8 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container httpd ready: false, restart count 0 Jan 3 12:24:35.011: INFO: webserver-deployment-5d9fdcc779-6b7jn started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container httpd ready: false, restart count 0 Jan 3 12:24:35.011: INFO: 
cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4rndx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:35.011: INFO: webserver-deployment-5d9fdcc779-bn7jl started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container httpd ready: false, restart count 0 Jan 3 12:24:35.011: INFO: netserver-0 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container webserver ready: false, restart count 0 Jan 3 12:24:35.011: INFO: inline-volume-tester-nch9r started at 2023-01-03 12:18:41 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 3 12:24:35.011: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:35.011: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-m9mmt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:35.011: INFO: busybox-113a8ec1-8769-4eed-b07b-3de46bcbab1a started at 2023-01-03 12:23:54 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container busybox ready: false, restart count 0 Jan 3 12:24:35.011: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:35.011: INFO: pode29dd966-b8d7-4a8b-959e-281433e89dbd started at 2023-01-03 12:22:02 +0000 UTC (0+1 container 
statuses recorded) Jan 3 12:24:35.011: INFO: Container nginx ready: false, restart count 0 Jan 3 12:24:35.011: INFO: cilium-zp2v2 started at 2023-01-03 12:09:25 +0000 UTC (1+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:24:35.011: INFO: Container cilium-agent ready: true, restart count 0 Jan 3 12:24:35.011: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:35.011: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:35.011: INFO: pod-af9e6052-31a5-4ba7-a45b-0e61eb4fab67 started at 2023-01-03 12:18:12 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container write-pod ready: false, restart count 0 Jan 3 12:24:35.011: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:35.011: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:35.011: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:35.011: INFO: suspend-false-to-true-whr7v started at 2023-01-03 
12:19:04 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container c ready: true, restart count 0 Jan 3 12:24:35.011: INFO: netserver-0 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container webserver ready: false, restart count 0 Jan 3 12:24:35.011: INFO: pod-ff0cde31-130b-41cc-82ed-a17554f19830 started at 2023-01-03 12:24:03 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:35.011: INFO: Container test-container ready: false, restart count 0 Jan 3 12:24:35.624: INFO: Latency metrics for node ip-172-20-33-54.ap-northeast-2.compute.internal Jan 3 12:24:35.624: INFO: Logging node info for node ip-172-20-37-52.ap-northeast-2.compute.internal Jan 3 12:24:35.801: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-52.ap-northeast-2.compute.internal c3f1ba3a-309d-47d4-9106-f4b4ecf80ce1 19624 0 2023-01-03 12:09:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-52.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-37-52.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0188365058f7426fb"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-03 12:19:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0188365058f7426fb,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:19:52 +0000 UTC,LastTransitionTime:2023-01-03 12:09:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.52,},NodeAddress{Type:ExternalIP,Address:54.180.156.67,},NodeAddress{Type:Hostname,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-52.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-180-156-67.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2beb620baecd9cbda3e3db4fc66fe2,SystemUUID:ec2beb62-0bae-cd9c-bda3-e3db4fc66fe2,BootID:e8a83e85-3d73-4732-8b1e-2d93bbf7f6bc,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 3 12:24:35.802: INFO: Logging kubelet events for node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:24:35.984: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:24:36.169: INFO: netserver-1 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:36.169: INFO: netserver-1 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container webserver ready: true, restart count 0
Jan 3 12:24:36.169: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-w96g7 started at 2023-01-03 12:20:33 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:24:36.169: INFO: netserver-1 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:36.169: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-j8lbc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:36.169: INFO: cilium-v6smb started at 2023-01-03 12:09:20 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:24:36.169: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:24:36.169: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:36.169: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:36.169: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:36.169: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-pjqlp started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:36.169: INFO: hostexec-ip-172-20-37-52.ap-northeast-2.compute.internal-5jxz7 started at 2023-01-03 12:20:49 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:24:36.169: INFO: webserver-deployment-5d9fdcc779-4tszx started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:36.169: INFO: ebs-csi-node-fkxkq started at 2023-01-03 12:09:20 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:24:36.169: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:24:36.169: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:24:36.169: INFO: pod-31fd5f45-887c-4042-895a-ab4f7b50670f started at 2023-01-03 12:20:38 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:24:36.169: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:36.169: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-w42zg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:36.169: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-l9xkv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:36.169: INFO: coredns-867df8f45c-js4mj started at 2023-01-03 12:10:00 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container coredns ready: true, restart count 0
Jan 3 12:24:36.169: INFO: webserver-deployment-5d9fdcc779-htl5s started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:36.169: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:36.169: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:36.169: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:36.169: INFO: pod-subpath-test-preprovisionedpv-4ngh started at 2023-01-03 12:21:10 +0000 UTC (2+1 container statuses recorded)
Jan 3
12:24:36.169: INFO: Init container init-volume-preprovisionedpv-4ngh ready: false, restart count 0 Jan 3 12:24:36.169: INFO: Init container test-init-subpath-preprovisionedpv-4ngh ready: false, restart count 0 Jan 3 12:24:36.169: INFO: Container test-container-subpath-preprovisionedpv-4ngh ready: false, restart count 0 Jan 3 12:24:36.775: INFO: Latency metrics for node ip-172-20-37-52.ap-northeast-2.compute.internal Jan 3 12:24:36.775: INFO: Logging node info for node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:24:36.952: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-48-181.ap-northeast-2.compute.internal 96573984-3972-4958-a21d-91e5b7179ec3 20641 0 2023-01-03 12:09:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-48-181.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.hostpath.csi/node:ip-172-20-48-181.ap-northeast-2.compute.internal topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-2507":"ip-172-20-48-181.ap-northeast-2.compute.internal","ebs.csi.aws.com":"i-0c02313085f6ea916"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2023-01-03 12:09:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:17:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {e2e.test Update v1 2023-01-03 12:17:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-0c02313085f6ea916,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051652608 0} {<nil>} 3956692Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946795008 0} {<nil>} 3854292Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:22:32 +0000 UTC,LastTransitionTime:2023-01-03 12:09:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.48.181,},NodeAddress{Type:ExternalIP,Address:43.201.60.170,},NodeAddress{Type:Hostname,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-48-181.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-43-201-60-170.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2eb3762b49570c6e3d8607a5e516da,SystemUUID:ec2eb376-2b49-570c-6e3d-8607a5e516da,BootID:bd8a3c9d-ca15-400a-bb20-3a3e2aa04c7f,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:20293261,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/regression-issue-74839@sha256:b4f1d8d61bdad84bd50442d161d5460e4019d53e989b64220fdbc62fc87d76bf k8s.gcr.io/e2e-test-images/regression-issue-74839:1.2],SizeBytes:18651485,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 3 12:24:36.952: INFO: Logging kubelet events for node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:24:37.131: INFO: Logging pods the kubelet thinks is on node ip-172-20-48-181.ap-northeast-2.compute.internal Jan 3 12:24:37.318: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-czr66 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:37.318: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0 Jan 3 12:24:37.318: INFO: netserver-2 started at 2023-01-03 12:20:02 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:37.318: INFO: Container webserver ready: false, restart count 0 Jan 3 12:24:37.318: INFO: pod-87dc35fb-f86b-4b15-8720-1d77c6521c5b started at 2023-01-03 12:20:15 +0000 UTC (0+1 container statuses recorded) Jan 
3 12:24:37.318: INFO: Container write-pod ready: false, restart count 0
Jan 3 12:24:37.318: INFO: cilium-nsj92 started at 2023-01-03 12:09:18 +0000 UTC (1+1 container statuses recorded)
Jan 3 12:24:37.318: INFO: Init container clean-cilium-state ready: true, restart count 0
Jan 3 12:24:37.318: INFO: Container cilium-agent ready: true, restart count 0
Jan 3 12:24:37.318: INFO: coredns-867df8f45c-4fzzr started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.318: INFO: Container coredns ready: true, restart count 0
Jan 3 12:24:37.318: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.318: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:37.318: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.318: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:37.318: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.318: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:37.318: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.318: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:37.318: INFO: netserver-2 started at 2023-01-03 12:20:12 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.318: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:37.318: INFO: ebs-csi-node-5drk2 started at 2023-01-03 12:09:18 +0000 UTC (0+3 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container ebs-plugin ready: true, restart count 0
Jan 3 12:24:37.319: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:24:37.319: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:24:37.319: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:37.319: INFO: webserver-deployment-5d9fdcc779-f9rvm started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:37.319: INFO: inline-volume-tester2-6gs8h started at 2023-01-03 12:18:18 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container csi-volume-tester ready: false, restart count 0
Jan 3 12:24:37.319: INFO: netserver-2 started at 2023-01-03 12:24:01 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container webserver ready: false, restart count 0
Jan 3 12:24:37.319: INFO: coredns-autoscaler-557ccb4c66-pj66n started at 2023-01-03 12:09:37 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container autoscaler ready: true, restart count 0
Jan 3 12:24:37.319: INFO: hostexec-ip-172-20-48-181.ap-northeast-2.compute.internal-wl8zb started at 2023-01-03 12:20:11 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container agnhost-container ready: true, restart count 0
Jan 3 12:24:37.319: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:37.319: INFO: webserver-deployment-5d9fdcc779-g9bm5 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container httpd ready: false, restart count 0
Jan 3 12:24:37.319: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-v2c2j started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:37.319: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-p2j42 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: false, restart count 0
Jan 3 12:24:37.319: INFO: csi-hostpathplugin-0 started at 2023-01-03 12:17:59 +0000 UTC (0+7 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container csi-attacher ready: true, restart count 0
Jan 3 12:24:37.319: INFO: Container csi-provisioner ready: true, restart count 0
Jan 3 12:24:37.319: INFO: Container csi-resizer ready: true, restart count 0
Jan 3 12:24:37.319: INFO: Container csi-snapshotter ready: true, restart count 0
Jan 3 12:24:37.319: INFO: Container hostpath ready: true, restart count 0
Jan 3 12:24:37.319: INFO: Container liveness-probe ready: true, restart count 0
Jan 3 12:24:37.319: INFO: Container node-driver-registrar ready: true, restart count 0
Jan 3 12:24:37.319: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0
Jan 3 12:24:37.319: INFO: inline-volume-tester-sqtq6 started at 2023-01-03 12:17:59 +0000 UTC (0+1 container statuses recorded)
Jan 3 12:24:37.319: INFO: Container csi-volume-tester ready: true, restart count 0
Jan 3 12:24:37.957: INFO: Latency metrics for node ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:24:37.958: INFO: Logging node info for node ip-172-20-50-77.ap-northeast-2.compute.internal
Jan 3 12:24:38.135: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-50-77.ap-northeast-2.compute.internal
8fb0fd08-c4e3-467d-a3ed-803fd4fc6cc5 20092 0 2023-01-03 12:07:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-50-77.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-09592d5deddfe8924"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-03 12:07:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-03 12:07:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-03 12:07:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 
2023-01-03 12:08:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-03 12:09:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-09592d5deddfe8924,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3892264960 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3787407360 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 
UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:07:22 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:20:43 +0000 UTC,LastTransitionTime:2023-01-03 12:08:09 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.50.77,},NodeAddress{Type:ExternalIP,Address:15.165.77.221,},NodeAddress{Type:Hostname,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-50-77.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-15-165-77-221.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a153b3c5c2f63ca65051344522649,SystemUUID:ec2a153b-3c5c-2f63-ca65-051344522649,BootID:ef540d65-debc-49b7-93e8-d55cfe4956fd,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:231502799,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:136583630,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 
registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:126389044,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:54864177,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.26.0-beta.1],SizeBytes:42982346,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.26.0-beta.1],SizeBytes:42804933,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:26802430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 
registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.26.0-beta.1],SizeBytes:4967349,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 3 12:24:38.136: INFO: Logging kubelet events for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:24:38.314: INFO: Logging pods the kubelet thinks is on node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:24:38.496: INFO: etcd-manager-main-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:38.497: INFO: Container etcd-manager ready: true, restart count 0 Jan 3 12:24:38.497: INFO: cilium-operator-d84d55876-jlw9m started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:38.497: INFO: Container cilium-operator ready: true, restart count 1 Jan 3 12:24:38.497: INFO: ebs-csi-controller-74ccd5888c-qh2jn started at 2023-01-03 12:07:55 +0000 UTC (0+5 container statuses recorded) Jan 3 12:24:38.497: INFO: Container csi-attacher ready: true, restart count 0 Jan 3 12:24:38.497: INFO: Container csi-provisioner ready: true, restart count 0 Jan 3 12:24:38.497: INFO: Container csi-resizer ready: true, restart count 0 Jan 3 12:24:38.497: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:24:38.497: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:24:38.497: INFO: dns-controller-867784b75c-fs862 started at 2023-01-03 12:07:55 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:38.497: INFO: Container dns-controller ready: true, restart count 0 Jan 3 12:24:38.497: INFO: kube-controller-manager-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses 
recorded) Jan 3 12:24:38.497: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 3 12:24:38.497: INFO: kube-scheduler-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:38.497: INFO: Container kube-scheduler ready: true, restart count 0 Jan 3 12:24:38.497: INFO: etcd-manager-events-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:38.497: INFO: Container etcd-manager ready: true, restart count 0 Jan 3 12:24:38.497: INFO: kops-controller-54wzq started at 2023-01-03 12:07:54 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:38.497: INFO: Container kops-controller ready: true, restart count 0 Jan 3 12:24:38.497: INFO: kube-apiserver-ip-172-20-50-77.ap-northeast-2.compute.internal started at 2023-01-03 12:06:53 +0000 UTC (0+2 container statuses recorded) Jan 3 12:24:38.497: INFO: Container healthcheck ready: true, restart count 0 Jan 3 12:24:38.497: INFO: Container kube-apiserver ready: true, restart count 1 Jan 3 12:24:38.497: INFO: ebs-csi-node-5hfnh started at 2023-01-03 12:07:53 +0000 UTC (0+3 container statuses recorded) Jan 3 12:24:38.497: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:24:38.497: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:24:38.497: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 3 12:24:38.497: INFO: cilium-jrcck started at 2023-01-03 12:07:53 +0000 UTC (1+1 container statuses recorded) Jan 3 12:24:38.497: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:24:38.497: INFO: Container cilium-agent ready: true, restart count 1 Jan 3 12:24:39.077: INFO: Latency metrics for node ip-172-20-50-77.ap-northeast-2.compute.internal Jan 3 12:24:39.077: INFO: Logging node info for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:24:39.254: INFO: Node Info: 
&Node{ObjectMeta:{ip-172-20-52-37.ap-northeast-2.compute.internal aad66781-b33c-418b-b9d9-bf279890bb2f 20086 0 2023-01-03 12:09:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-2 failure-domain.beta.kubernetes.io/zone:ap-northeast-2a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-52-37.ap-northeast-2.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-2a topology.kubernetes.io/region:ap-northeast-2 topology.kubernetes.io/zone:ap-northeast-2a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-060b35b9149f1ba66"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-03 12:09:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-03 12:17:50 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-03 12:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-2a/i-060b35b9149f1ba66,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{49753808896 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4051644416 0} {<nil>} 3956684Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{44778427933 0} {<nil>} 44778427933 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3946786816 0} {<nil>} 3854284Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-03 12:20:42 +0000 UTC,LastTransitionTime:2023-01-03 12:09:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.52.37,},NodeAddress{Type:ExternalIP,Address:3.38.101.72,},NodeAddress{Type:Hostname,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-52-37.ap-northeast-2.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-3-38-101-72.ap-northeast-2.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2e82df6bf35f54d2aca0b4e9167917,SystemUUID:ec2e82df-6bf3-5f54-d2ac-a0b4e9167917,BootID:53771566-a9e2-4e41-a8e2-f140d3f619b9,KernelVersion:5.15.0-1026-aws,OSImage:Ubuntu 22.04.1 LTS,ContainerRuntimeVersion:containerd://1.6.14,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:166719855,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:114247223,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/library/nginx@sha256:0047b729188a15da49380d9506d65959cce6d40291ccfb4e039f5dc7efd33286 
docker.io/library/nginx:latest],SizeBytes:56882284,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:10e231cf07c82dfab5141c1afe548dc734e5fc1f67665ec3982a325d1bd31a9a registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.0],SizeBytes:29725756,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 
registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:8240918,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32 kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0432a2d2bd3b2dc32,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04dfa5d672d819a83,DevicePath:,},},Config:nil,},} Jan 3 12:24:39.254: INFO: Logging kubelet events for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:24:39.433: INFO: Logging pods the kubelet thinks is on node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:24:39.619: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.619: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:39.619: INFO: 
inline-volume-tester-mmvnp started at 2023-01-03 12:20:40 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.619: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 3 12:24:39.619: INFO: webserver-deployment-5d9fdcc779-zvnv5 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.619: INFO: Container httpd ready: false, restart count 0 Jan 3 12:24:39.619: INFO: pod-e8e32ffc-f03a-43d1-8ea3-72be400f27cc started at 2023-01-03 12:24:12 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.619: INFO: Container test-container ready: false, restart count 0 Jan 3 12:24:39.619: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2kwc4 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.619: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:39.619: INFO: test-ss-0 started at 2023-01-03 12:23:08 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.619: INFO: Container webserver ready: false, restart count 0 Jan 3 12:24:39.619: INFO: netserver-3 started at 2023-01-03 12:24:02 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.619: INFO: Container webserver ready: false, restart count 0 Jan 3 12:24:39.619: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.619: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:39.620: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:39.620: INFO: sample-apiserver-deployment-7cdc9f5bf7-tmj6p started at 2023-01-03 12:23:11 +0000 UTC (0+2 container statuses recorded) Jan 3 12:24:39.620: INFO: Container etcd ready: false, 
restart count 0 Jan 3 12:24:39.620: INFO: Container sample-apiserver ready: false, restart count 0 Jan 3 12:24:39.620: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-k28rp started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:39.620: INFO: netserver-3 started at 2023-01-03 12:20:03 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container webserver ready: true, restart count 0 Jan 3 12:24:39.620: INFO: netserver-3 started at 2023-01-03 12:20:13 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container webserver ready: true, restart count 0 Jan 3 12:24:39.620: INFO: sample-crd-conversion-webhook-deployment-67c86bcf4b-bkg7c started at 2023-01-03 12:23:19 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container sample-crd-conversion-webhook ready: false, restart count 0 Jan 3 12:24:39.620: INFO: hostpath-symlink-prep-volume-6653 started at 2023-01-03 12:24:05 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container init-volume-volume-6653 ready: false, restart count 0 Jan 3 12:24:39.620: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:39.620: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:39.620: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: 
true, restart count 0 Jan 3 12:24:39.620: INFO: ebs-csi-node-qg9wn started at 2023-01-03 12:09:19 +0000 UTC (0+3 container statuses recorded) Jan 3 12:24:39.620: INFO: Container ebs-plugin ready: true, restart count 0 Jan 3 12:24:39.620: INFO: Container liveness-probe ready: true, restart count 0 Jan 3 12:24:39.620: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 3 12:24:39.620: INFO: cilium-8n4td started at 2023-01-03 12:09:19 +0000 UTC (1+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 3 12:24:39.620: INFO: Container cilium-agent ready: true, restart count 0 Jan 3 12:24:39.620: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgpzl started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:39.620: INFO: webserver-deployment-5d9fdcc779-tws27 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container httpd ready: false, restart count 0 Jan 3 12:24:39.620: INFO: webserver-deployment-5d9fdcc779-nwlm7 started at 2023-01-03 12:20:52 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container httpd ready: false, restart count 0 Jan 3 12:24:39.620: INFO: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749 started at 2023-01-03 12:17:51 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf ready: true, restart count 0 Jan 3 12:24:39.620: INFO: pod1 started at 2023-01-03 12:24:07 +0000 UTC (0+1 container statuses recorded) Jan 3 12:24:39.620: INFO: Container agnhost-container ready: false, restart count 0 Jan 3 12:24:40.223: INFO: Latency metrics for node ip-172-20-52-37.ap-northeast-2.compute.internal Jan 3 12:24:40.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "secrets-6292" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\skubelet\sClean\sup\spods\son\snode\skubelet\sshould\sbe\sable\sto\sdelete\s10\spods\sper\snode\sin\s1m0s\.$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 Jan 3 12:25:01.515: Unexpected error: <*errors.errorString | 0xc0037ef700>: { s: "only 38 pods started out of 40", } only 38 pods started out of 40 occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:354
[BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 3 12:17:48.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:274 [BeforeEach] Clean up pods on node /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:295 [It] kubelet should be able to delete 10 pods per node in 1m0s. /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:341 STEP: Creating a RC of 40 pods and wait until all pods of this RC are running STEP: creating replication controller cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf in namespace kubelet-1443 I0103 12:17:51.052317 6649 runners.go:193] Created replication controller with name: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf, namespace: kubelet-1443, replica count: 40 Jan 3 12:17:51.093: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:17:51.093: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:17:51.267: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:17:51.268: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:17:51.664: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:17:56.314: INFO: Missing info/stats for container 
"runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:17:56.325: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:17:56.476: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:17:56.511: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:17:56.922: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:18:01.304036 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 11 running, 29 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:18:01.539: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:01.704: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:01.783: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:01.896: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:02.175: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:18:06.764: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:06.919: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:07.022: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:07.108: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:07.428: INFO: Missing info/stats for container 
"runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:18:11.304539 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 27 running, 13 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:18:11.988: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:12.139: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:12.247: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:12.324: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:12.709: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:18:17.209: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:17.347: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:17.506: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:17.551: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:17.948: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:18:21.306850 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 30 running, 10 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:18:22.427: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:22.552: INFO: Missing info/stats for container "runtime" on node 
"ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:22.724: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:22.817: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:23.183: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:18:27.653: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:27.765: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:27.969: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:28.076: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:28.449: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:18:31.307264 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 33 running, 7 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:18:32.903: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:32.971: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:33.298: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:33.419: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:33.720: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:18:38.124: INFO: Missing info/stats for container "runtime" on node 
"ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:38.177: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:38.510: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:38.646: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:38.962: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:18:41.307705 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 35 running, 5 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:18:43.350: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:43.384: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:43.736: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:43.898: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:44.182: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:18:48.569: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:48.590: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:48.948: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:49.136: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:49.446: INFO: Missing info/stats for container "runtime" on node 
"ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:18:51.308901 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 35 running, 5 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:18:53.787: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:53.802: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:54.180: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:54.357: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:54.670: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:18:59.006: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:18:59.121: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:18:59.404: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:18:59.577: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:18:59.884: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:19:01.309380 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 37 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:19:04.212: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:19:04.336: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" 
Jan 3 12:19:04.613: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:19:04.803: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:19:05.123: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:19:09.421: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:19:09.553: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:19:09.884: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:19:10.035: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:19:10.345: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:19:11.309853 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 37 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:19:14.628: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:19:14.830: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:19:15.101: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:19:15.279: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:19:15.564: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:19:19.831: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 
12:19:20.047: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:19:20.327: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:19:20.537: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:19:20.778: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:19:21.310571 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 37 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:19:25.041: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:19:25.299: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:19:25.541: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:19:25.770: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:19:25.991: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:19:30.254: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:19:30.503: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:19:30.757: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:19:31.003: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:19:31.228: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 
12:19:31.311306 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 37 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:19:35.460: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:19:35.716: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:19:35.969: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:19:36.349: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:19:36.458: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:19:40.666: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:19:40.928: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:19:41.194: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" I0103 12:19:41.311655 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 37 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:19:41.592: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:19:41.664: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:19:45.873: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:19:46.140: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:19:46.410: INFO: Missing info/stats for 
container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:19:46.827: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:19:46.877: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:19:51.078: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" I0103 12:19:51.312933 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 37 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:19:51.357: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:19:51.650: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:19:52.054: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:19:52.139: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:19:56.287: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:19:56.571: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:19:56.862: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:19:57.315: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:19:57.380: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:20:01.314275 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 
unknown, 0 runningButNotReady Jan 3 12:20:01.495: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:01.781: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:02.082: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:20:02.588: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:20:02.741: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:20:06.699: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:07.053: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:07.286: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:20:07.828: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:20:07.993: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:20:11.314691 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:20:11.908: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:12.289: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:12.506: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:20:13.059: INFO: Missing info/stats for container "runtime" on node 
"ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:20:13.212: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:20:17.113: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:17.503: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:17.760: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:20:18.287: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:20:18.425: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:20:21.316996 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:20:22.328: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:22.723: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:22.976: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:20:23.522: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:20:23.633: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:20:27.536: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:27.930: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:28.188: INFO: Missing info/stats for container "runtime" on node 
"ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:20:28.755: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:20:28.843: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:20:31.317539 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:20:32.747: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:33.142: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:33.399: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:20:34.012: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:20:34.056: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:20:37.954: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:38.357: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:38.615: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:20:39.241: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:20:39.269: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:20:41.318233 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 3 12:20:43.161: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:43.561: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:43.860: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:20:44.483: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:20:44.545: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:20:48.367: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:48.788: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:49.073: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:20:49.697: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:20:49.777: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:20:51.318574 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:20:53.578: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:53.995: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:54.304: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:20:54.912: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 
12:20:55.019: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:20:58.783: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:20:59.205: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:20:59.517: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:21:00.187: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:21:00.385: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:21:01.318926 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:21:03.991: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:21:04.423: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:21:04.740: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:21:05.420: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:21:05.656: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:21:09.197: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:21:09.640: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:21:09.957: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 
12:21:10.711: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:21:10.889: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:21:11.319265 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:21:14.401: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:21:14.850: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:21:15.171: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:21:15.927: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:21:16.139: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:21:19.606: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:21:20.060: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:21:20.390: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:21:21.151: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:21:21.320038 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:21:21.371: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:21:24.812: INFO: Missing info/stats for 
container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:21:25.268: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:21:25.602: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:21:26.368: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:21:26.600: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:21:30.025: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:21:30.507: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:21:30.845: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" I0103 12:21:31.320440 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:21:31.584: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:21:31.830: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:21:35.230: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:21:35.718: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:21:36.075: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:21:36.807: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:21:37.054: INFO: Missing info/stats for 
container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:21:40.437: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:21:40.931: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:21:41.290: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" I0103 12:21:41.321399 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:21:42.023: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:21:42.281: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:21:45.637: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:21:46.142: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:21:46.500: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:21:47.300: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:21:47.590: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:21:50.842: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" I0103 12:21:51.321981 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:21:51.355: INFO: Missing info/stats for container "runtime" on node 
"ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:21:51.729: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:21:52.551: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:21:52.875: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:21:56.048: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:21:56.565: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:21:56.940: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:21:57.766: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:21:58.117: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:22:01.257: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" I0103 12:22:01.322562 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:22:01.782: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:22:02.196: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:22:02.985: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:22:03.380: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:22:06.464: INFO: Missing info/stats for container "runtime" on node 
"ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:22:07.006: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:22:07.442: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:22:08.205: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:22:08.610: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:22:11.322929 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:22:11.669: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:22:12.216: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:22:12.679: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:22:13.421: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:22:13.830: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:22:16.879: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:22:17.427: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:22:17.904: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:22:18.636: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:22:19.065: INFO: Missing info/stats for container "runtime" on node 
"ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:22:21.325179 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:22:22.086: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:22:22.667: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:22:23.147: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:22:23.884: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:22:24.311: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:22:27.289: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:22:27.886: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:22:28.374: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:22:29.098: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:22:29.577: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:22:31.326062 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:22:32.494: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:22:33.103: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" 
Jan 3 12:22:33.589: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:22:34.320: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:22:34.806: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:22:37.698: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:22:38.312: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:22:38.803: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:22:39.553: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:22:40.072: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:22:41.326580 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:22:42.925: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:22:43.522: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:22:44.016: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:22:44.764: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:22:45.315: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:22:48.133: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 
12:22:48.744: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:22:49.264: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:22:50.003: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:22:50.610: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:22:51.326905 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:22:53.340: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:22:53.962: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:22:54.507: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:22:55.255: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:22:55.837: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:22:58.543: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:22:59.183: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:22:59.731: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:23:00.470: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:23:01.112: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 
12:23:01.327678 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:23:03.751: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:23:04.391: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:23:04.946: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:23:05.692: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:23:06.326: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:23:08.960: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:23:09.600: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:23:10.170: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:23:10.932: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:23:11.328038 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:23:11.564: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:23:14.167: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:23:14.820: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:23:15.394: INFO: Missing info/stats for 
container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:23:16.169: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:23:16.790: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:23:19.372: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:23:20.029: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:23:20.615: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" I0103 12:23:21.329081 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:23:21.380: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:23:22.022: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:23:24.575: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:23:25.245: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:23:25.830: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:23:26.596: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:23:27.259: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:23:29.779: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:23:30.453: INFO: Missing info/stats for 
container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:23:31.093: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" I0103 12:23:31.330203 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:23:31.816: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:23:32.497: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:23:34.985: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:23:35.667: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:23:36.312: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:23:37.031: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:23:37.728: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:23:40.193: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:23:40.887: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" I0103 12:23:41.330795 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:23:41.540: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:23:42.253: INFO: Missing info/stats for container "runtime" on node 
"ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:23:42.956: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:23:45.397: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:23:46.111: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:23:46.755: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:23:47.471: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:23:48.193: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:23:50.604: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:23:51.317: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" I0103 12:23:51.331292 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:23:51.979: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:23:52.674: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:23:53.455: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:23:55.819: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:23:56.531: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:23:57.203: INFO: Missing info/stats for container "runtime" on node 
"ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:23:57.910: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:23:58.677: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:24:01.026: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" I0103 12:24:01.331836 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:24:01.750: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:02.416: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:24:03.126: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:24:03.910: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:24:06.235: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:24:07.024: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:07.628: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:24:08.341: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:24:09.134: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:24:11.333115 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 3 12:24:11.441: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:24:12.235: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:12.846: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:24:13.563: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:24:14.360: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:24:16.648: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:24:17.443: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:18.056: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:24:18.778: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:24:19.592: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:24:21.334206 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:24:21.863: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:24:22.658: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:23.273: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:24:24.015: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 
12:24:24.820: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:24:27.077: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:24:27.871: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:28.482: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:24:29.238: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:24:30.061: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:24:31.334653 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:24:32.288: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:24:33.122: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:33.695: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:24:34.480: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:24:35.287: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:24:37.491: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:24:38.337: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:38.911: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 
12:24:39.705: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:24:40.521: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:24:41.335000 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:24:42.702: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:24:43.560: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:44.189: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:24:44.926: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:24:45.757: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:24:47.906: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:24:48.786: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:49.401: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:24:50.154: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:24:50.979: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" I0103 12:24:51.335285 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 3 12:24:53.114: INFO: Missing info/stats for 
container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:24:54.000: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:54.615: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:24:55.438: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" Jan 3 12:24:56.221: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal" Jan 3 12:24:58.318: INFO: Missing info/stats for container "runtime" on node "ip-172-20-50-77.ap-northeast-2.compute.internal" Jan 3 12:24:59.245: INFO: Missing info/stats for container "runtime" on node "ip-172-20-52-37.ap-northeast-2.compute.internal" Jan 3 12:24:59.830: INFO: Missing info/stats for container "runtime" on node "ip-172-20-33-54.ap-northeast-2.compute.internal" Jan 3 12:25:00.677: INFO: Missing info/stats for container "runtime" on node "ip-172-20-37-52.ap-northeast-2.compute.internal" I0103 12:25:01.335546 6649 runners.go:193] cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Pods: 40 out of 40 created, 38 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0103 12:25:01.514656 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24 ip-172-20-52-37.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514768 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g ip-172-20-33-54.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514783 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5 ip-172-20-33-54.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514798 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2kwc4 ip-172-20-52-37.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514809 6649 runners.go:193] 
Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt ip-172-20-33-54.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514819 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k ip-172-20-48-181.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514829 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4rndx ip-172-20-33-54.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514838 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749 ip-172-20-52-37.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514849 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx ip-172-20-37-52.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514859 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd ip-172-20-33-54.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514869 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462 ip-172-20-48-181.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514879 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv ip-172-20-48-181.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514908 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-czr66 ip-172-20-48-181.ap-northeast-2.compute.internal Pending <nil> I0103 12:25:01.514919 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7 ip-172-20-37-52.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514929 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8 ip-172-20-33-54.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514939 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf ip-172-20-52-37.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514952 6649 runners.go:193] Pod 
cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx ip-172-20-37-52.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514962 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr ip-172-20-48-181.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514972 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg ip-172-20-48-181.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514985 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7 ip-172-20-33-54.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.514995 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-j8lbc ip-172-20-37-52.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515005 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg ip-172-20-33-54.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515017 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m ip-172-20-37-52.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515027 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-k28rp ip-172-20-52-37.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515037 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f ip-172-20-37-52.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515047 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-l9xkv ip-172-20-37-52.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515058 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj ip-172-20-48-181.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515068 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-m9mmt ip-172-20-33-54.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515078 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd 
ip-172-20-48-181.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515088 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-p2j42 ip-172-20-48-181.ap-northeast-2.compute.internal Pending <nil> I0103 12:25:01.515099 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-pjqlp ip-172-20-37-52.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515120 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd ip-172-20-52-37.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515140 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgpzl ip-172-20-52-37.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515152 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-v2c2j ip-172-20-48-181.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515162 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx ip-172-20-33-54.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515172 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc ip-172-20-37-52.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515183 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts ip-172-20-52-37.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515200 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-w42zg ip-172-20-37-52.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515212 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r ip-172-20-52-37.ap-northeast-2.compute.internal Running <nil> I0103 12:25:01.515230 6649 runners.go:193] Pod cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5 ip-172-20-52-37.ap-northeast-2.compute.internal Running <nil> Jan 3 12:25:01.515: FAIL: Unexpected error: <*errors.errorString | 0xc0037ef700>: { s: "only 38 pods started out of 40", } only 38 pods started out of 
40 occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/node.glob..func5.2.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:354 +0x390
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0007649c0, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] Clean up pods on node
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:326
STEP: removing the label kubelet_cleanup off the node ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:01.572: INFO: Missing info/stats for container "runtime" on node "ip-172-20-48-181.ap-northeast-2.compute.internal"
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node ip-172-20-48-181.ap-northeast-2.compute.internal
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node ip-172-20-52-37.ap-northeast-2.compute.internal
STEP: verifying the node doesn't have the label kubelet_cleanup
STEP: removing the label kubelet_cleanup off the node ip-172-20-33-54.ap-northeast-2.compute.internal
STEP: verifying the node doesn't have the label kubelet_cleanup
[AfterEach] [sig-node] kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "kubelet-1443".
STEP: Found 221 events.
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:50 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf: {replication-controller } SuccessfulCreate: Created pod: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:50 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf: {replication-controller } SuccessfulCreate: Created pod: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf: {replication-controller } SuccessfulCreate: Created pod: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf: {replication-controller } SuccessfulCreate: Created pod: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf: {replication-controller } SuccessfulCreate: Created pod: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf: {replication-controller } SuccessfulCreate: Created pod: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf: {replication-controller } SuccessfulCreate: Created pod: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf: {replication-controller } SuccessfulCreate: Created pod: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf: {replication-controller } SuccessfulCreate: Created pod: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24 to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5 to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2kwc4: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2kwc4 to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4rndx: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4rndx to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749 to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462 to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-czr66: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-czr66 to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7 to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8 to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7 to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-j8lbc: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-j8lbc to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-k28rp: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-k28rp to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-l9xkv: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-l9xkv to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-m9mmt: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-m9mmt to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-p2j42: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-p2j42 to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-pjqlp: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-pjqlp to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgpzl: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgpzl to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-v2c2j: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-v2c2j to ip-172-20-48-181.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx to ip-172-20-33-54.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-w42zg: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-w42zg to ip-172-20-37-52.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:51 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5: {default-scheduler } Scheduled: Successfully assigned kubelet-1443/cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5 to ip-172-20-52-37.ap-northeast-2.compute.internal
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf: {replication-controller } SuccessfulCreate: (combined from similar events): Created pod: cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-fdrvf: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.013: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jtqtg: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-k28rp: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-k28rp: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:52 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-k28rp: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-25x24: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2kwc4: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "642091b8411920b6eb034962326922fa5afb0a19c5944be439c3f6a9bedf6602": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c0062245692f9dc1f33306c810a153e3b9c6abae3dcbde07eecdd3d58f7d5375": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-5h749: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-8wctd: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-b9462: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-c9vtv: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-czr66: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "76ff798d64b4c32ad1d883862b92d1d4ad778b1fc72ebb86b160d3462efa6b9c": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h84k7: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "8f655a6ad46e0b4296b2ae3fc650e0ba8eb7d1c36ee0018a0ffb5751d3c9aa87": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgpzl: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "48b0450c6a54b70939564d9c33299c0e6e789233534711916c6ce7df676a1155": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-v2c2j: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "495b7fa12ab0d1e0602be02d64f04caab07339a2baf62e9d29f9ba39c561a904": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vs6ts: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-xxg5r: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:53 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "74bdb9aa3d4fd2c2bb6e20aadf81c010c08cfe63af9d528769f4dc00b33ccf1f": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2g27g: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2hpz5: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4fwrt: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-dkdq8: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-gpxqr: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-h2gmg: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-ldphj: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-nkxzd: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-p2j42: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f56c85c54249bd260bbd3c80bb75baf5c241c8fef9d0afa15a55b7a674689494": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:54 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1716b23d04b8640f6175334729be5f7ca34a94ccf5222dd0708dc447e202aadd": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4rndx: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "0b6fd842c5d3c55d1994047a4ed85e0c93284e8cbffcda4fd9f48eda656f6e07": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-7k7mx: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-d9qh7: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-g8zzx: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-j8lbc: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e2f430ba32d22cb7b00f19063136ce53bfa254814574cd7ddefe382d0842c03a": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "d76302de7525152419b81bf2b2a09a7a34cd07171559260479c4bee9f735fdf8": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "29856c80907ba9561be41ecaa5939bb846ab563b40285e80a0f0fabcad1239a5": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-m9mmt: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f3c2315c1ae3ef8ae37f1f14e66bbf73fd4c81c50f54d51ce752abd47fd1ab62": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-pjqlp: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "2832063029e125ee4c34967f8ab688c2edea04bf84caf22cc11043464045ff6f": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:55 +0000 UTC - event for
cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vrhrc: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:56 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-l9xkv: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b686106dc247d5ddac282f7677a7aacfa6d1b4a33d004669c3d7bc1d6eec4cca": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:25:04.014: INFO: At 2023-01-03 12:17:56 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-w42zg: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "c5b15f877964c1db1032944b6e6f6c456b493102dfba93c6debaf4f11bb282a0": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:04 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:04 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:04 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4jx8k: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} Created: Created container 
cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:05 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:05 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:05 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-v2c2j: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f5fdc70e918c3fd56f481ebd0b6803f2a9c5197a490d07ed1202953be63bda75": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:06 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-l9xkv: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "ccd12d507081e1d1403319df5af940a5a80f9e68c333b10581727244ee9fc27f": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:06 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgkkd: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:06 +0000 UTC - event for 
cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:06 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:06 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-vplwx: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:06 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:07 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:07 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-z56f5: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:08 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-2kwc4: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "99717de73578d9eaccde5df06a781d92a750be5f368d42c92d63f4c123f9045f": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available 
Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:08 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-m9mmt: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "59a275d14438d1f248274cba1a238da1308c658d763b2b60053dc0d5743caa34": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:08 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-sgpzl: {kubelet ip-172-20-52-37.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7c3cf40ecb9ca56a86d813aa99f53969eb48975f2364dddad82dac4e768792c3": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:09 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-4rndx: {kubelet ip-172-20-33-54.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "161889bb38bd4f08470f08e8d4e3e9de896427fb73928e352a4d3ecd503d8fff": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:09 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-czr66: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7298c28ac688e347ddd14b0f1b26f3d8ca118073c3121fe9dbbb2652f69835c0": plugin type="cilium-cni" 
name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:09 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-j8lbc: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "3e7ad06dfcaeb4c3ff363d363094aa443de311f023607b6830775cad7954a56c": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:09 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Started: Started container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:09 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Created: Created container cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:09 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-jzf8m: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:09 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-p2j42: {kubelet ip-172-20-48-181.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a65e338515ae2adc5761e114a7af42d907c0de85a446bef0acf60e76c775c4ad": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:25:04.014: INFO: At 2023-01-03 
12:18:10 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-kkt4f: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "a629d98931335dfe9fc4132ebbb4b79aa389432cb73fb0c6678118c3ee2250ea": plugin type="cilium-cni" name="cilium" failed (add): unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 3 12:25:04.014: INFO: At 2023-01-03 12:18:10 +0000 UTC - event for cleanup40-ef63505d-2d8c-487a-84d8-7e2db7886baf-pjqlp: {kubelet ip-172-20-37-52.ap-northeast-2.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = faile