Result   | FAILURE
Tests    | 41 failed / 746 succeeded
Started  |
Elapsed  | 46m11s
Revision | master
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(ext4\)\]\svolumes\sshould\sallow\sexec\sof\sfiles\son\sthe\svolume$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Jan 15 02:52:27.627: Unexpected error:
    <*errors.errorString | 0xc003882400>: {
        s: "expected pod \"exec-volume-test-dynamicpv-t4jj\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-dynamicpv-t4jj\" to be \"Succeeded or Failed\"",
    }
    expected pod "exec-volume-test-dynamicpv-t4jj" success: Gave up after waiting 5m0s for pod "exec-volume-test-dynamicpv-t4jj" to be "Succeeded or Failed"
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:770
from junit_12.xml
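For context, the error above is the generic timeout from the e2e framework's pod-wait helper: the test pod never left Pending, so after 5m0s the helper gave up. The sketch below is illustrative only, assuming client-go, a 2s poll interval, and hypothetical names; it is not the framework's actual implementation, but it mirrors the behaviour that produces the repeated `Phase="Pending"` poll lines in the log below and the final "Gave up after waiting 5m0s" error.

```go
// Illustrative sketch of a "wait up to 5m0s for the pod to be Succeeded or Failed"
// loop, assuming client-go. NOT the e2e framework's actual helper; names, the 2s
// interval, and the 5m timeout are taken from what the log shows.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodSuccessOrFail polls the pod's phase until it is Succeeded (done),
// Failed (error), or the timeout expires -- the path this test hit.
func WaitForPodSuccessOrFail(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		default:
			// Pending/Running: keep polling, which is what the repeated
			// `Phase="Pending"` log lines correspond to.
			return false, nil
		}
	})
}
```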
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51
[BeforeEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 15 02:47:18.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename volume
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow exec of files on the volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:196
Jan 15 02:47:19.504: INFO: Creating resource for dynamic PV
Jan 15 02:47:19.504: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi}
STEP: creating a StorageClass volume-2932-e2e-scknxwt
STEP: creating a claim
Jan 15 02:47:19.653: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
STEP: Creating pod exec-volume-test-dynamicpv-t4jj
STEP: Creating a pod to test exec-volume-test
Jan 15 02:47:20.099: INFO: Waiting up to 5m0s for pod "exec-volume-test-dynamicpv-t4jj" in namespace "volume-2932" to be "Succeeded or Failed"
Jan 15 02:47:20.246: INFO: Pod "exec-volume-test-dynamicpv-t4jj": Phase="Pending", Reason="", readiness=false. Elapsed: 147.04789ms
Jan 15 02:47:22.393: INFO: Pod "exec-volume-test-dynamicpv-t4jj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294697184s
Jan 15 02:47:24.545: INFO: Pod "exec-volume-test-dynamicpv-t4jj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.44593518s
[... repeating poll lines (every ~2s) omitted: the pod stayed Phase="Pending" for the entire 5m0s wait ...]
Jan 15 02:52:18.884: INFO: Pod "exec-volume-test-dynamicpv-t4jj": Phase="Pending", Reason="", readiness=false. Elapsed: 4m58.78553492s
Jan 15 02:52:21.181: INFO: Failed to get logs from node "ip-172-20-43-202.ap-northeast-1.compute.internal" pod "exec-volume-test-dynamicpv-t4jj" container "exec-container-dynamicpv-t4jj": the server rejected our request for an unknown reason (get pods exec-volume-test-dynamicpv-t4jj)
STEP: delete the pod
Jan 15 02:52:21.332: INFO: Waiting for pod exec-volume-test-dynamicpv-t4jj to disappear
Jan 15 02:52:21.479: INFO: Pod exec-volume-test-dynamicpv-t4jj still exists
Jan 15 02:52:23.479: INFO: Waiting for pod exec-volume-test-dynamicpv-t4jj to disappear
Jan 15 02:52:23.627: INFO: Pod exec-volume-test-dynamicpv-t4jj still exists
Jan 15 02:52:25.479: INFO: Waiting for pod exec-volume-test-dynamicpv-t4jj to disappear
Jan 15 02:52:25.627: INFO: Pod exec-volume-test-dynamicpv-t4jj still exists
Jan 15 02:52:27.479: INFO: Waiting for pod exec-volume-test-dynamicpv-t4jj to disappear
Jan 15 02:52:27.626: INFO: Pod exec-volume-test-dynamicpv-t4jj no longer exists
Jan 15 02:52:27.627: FAIL: Unexpected error:
    <*errors.errorString | 0xc003882400>: {
        s: "expected pod \"exec-volume-test-dynamicpv-t4jj\" success: Gave up after waiting 5m0s for pod \"exec-volume-test-dynamicpv-t4jj\" to be \"Succeeded or Failed\"",
    }
    expected pod "exec-volume-test-dynamicpv-t4jj" success: Gave up after waiting 5m0s for pod "exec-volume-test-dynamicpv-t4jj" to be "Succeeded or Failed"
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc00146c360, {0x70f719c, 0x0}, 0xc0048f5000, 0x0, {0xc003223118, 0x1, 0x1}, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:770 +0x176
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:567
k8s.io/kubernetes/test/e2e/storage/testsuites.testScriptInPod(0xc002228c60, {0x70d8c53, 0x0}, 0xc004340b40, 0xc003cb7140)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:255 +0x66b
k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumesTestSuite).DefineTests.func4()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:201 +0xb1
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0008661a0, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a

STEP: Deleting pvc
Jan 15 02:52:27.922: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.com6k7pp"
Jan 15 02:52:28.071: INFO: Waiting up to 3m0s for PersistentVolume pvc-c1797cf3-7ae1-4a19-93d8-db32950d7962 to get deleted
Jan 15 02:52:28.218: INFO: PersistentVolume pvc-c1797cf3-7ae1-4a19-93d8-db32950d7962 found and phase=Released (146.8714ms)
Jan 15 02:52:33.366: INFO: PersistentVolume pvc-c1797cf3-7ae1-4a19-93d8-db32950d7962 found and phase=Released (5.29444838s)
Jan 15 02:52:38.513: INFO: PersistentVolume pvc-c1797cf3-7ae1-4a19-93d8-db32950d7962 found and phase=Released (10.442057686s)
Jan 15 02:52:43.664: INFO: PersistentVolume pvc-c1797cf3-7ae1-4a19-93d8-db32950d7962 was removed
STEP: Deleting sc
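The AfterEach dump below collects the namespace events, and they contain the actual root cause: every sandbox-creation attempt on ip-172-20-43-202 failed because the local cilium agent could not allocate a pod IP ("[POST /ipam][502] postIpamFailure No more IPs available"), so the pod could never leave Pending. When triaging similar failures, the same events can be pulled directly from the API; the sketch below is a minimal, assumed client-go query (not part of the e2e framework, and the helper name is hypothetical) that filters events by that reason.

```go
// Minimal sketch, assuming client-go: list FailedCreatePodSandBox events in the
// test namespace to surface CNI/IPAM errors such as the cilium
// "No more IPs available" failures reported below. Illustrative only.
package triage

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PrintSandboxFailures prints the messages of events whose reason is
// FailedCreatePodSandBox; in this run those messages carry the
// "[POST /ipam][502] postIpamFailure No more IPs available" text.
func PrintSandboxFailures(ctx context.Context, c kubernetes.Interface, ns string) error {
	events, err := c.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
		FieldSelector: "reason=FailedCreatePodSandBox",
	})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s/%s: %s\n", e.LastTimestamp, e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
	}
	return nil
}
```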
[AfterEach] [Testpattern: Dynamic PV (ext4)] volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "volume-2932".
STEP: Found 17 events.
Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:19 +0000 UTC - event for ebs.csi.aws.com6k7pp: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding
Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:20 +0000 UTC - event for ebs.csi.aws.com6k7pp: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator
Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:20 +0000 UTC - event for ebs.csi.aws.com6k7pp: {ebs.csi.aws.com_ip-172-20-47-130_a755f206-d967-43ea-92fb-e7044f6cbd67 } Provisioning: External provisioner is provisioning volume for claim "volume-2932/ebs.csi.aws.com6k7pp"
Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:23 +0000 UTC - event for ebs.csi.aws.com6k7pp: {ebs.csi.aws.com_ip-172-20-47-130_a755f206-d967-43ea-92fb-e7044f6cbd67 } ProvisioningSucceeded: Successfully provisioned volume pvc-c1797cf3-7ae1-4a19-93d8-db32950d7962
Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:24 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {default-scheduler } Scheduled: Successfully assigned volume-2932/exec-volume-test-dynamicpv-t4jj to ip-172-20-43-202.ap-northeast-1.compute.internal
Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:28 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-c1797cf3-7ae1-4a19-93d8-db32950d7962"
Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:30 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "cd0e27c29c8fce844b8104d14d8541b0a18741c60caadbe59308769bb6a330b0" network for pod "exec-volume-test-dynamicpv-t4jj": networkPlugin cni failed to set up pod "exec-volume-test-dynamicpv-t4jj_volume-2932" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available
Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:32 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:34 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "bcdeda2a28dcba7a6dc6d5921bf21a38ec9a1ebe7017d5329ce9ac6b206ae0c5" network for pod "exec-volume-test-dynamicpv-t4jj": networkPlugin cni failed to set up pod "exec-volume-test-dynamicpv-t4jj_volume-2932" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:40 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "39f93ec93fdee927d39047e9dfa662346d2d3a343fa0c2115e5bf97d0643483f" network for pod "exec-volume-test-dynamicpv-t4jj": networkPlugin cni failed to set up pod "exec-volume-test-dynamicpv-t4jj_volume-2932" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:45 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a42db978a4c78f42b20cb14ce3409c190dc84fed5a3ad208273f197701262c64" network for pod "exec-volume-test-dynamicpv-t4jj": networkPlugin cni failed to set up pod "exec-volume-test-dynamicpv-t4jj_volume-2932" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:48 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "752af737e8ea85aba016ac1e117ff759dfba7c94991bd76d30557621b976301d" network for pod "exec-volume-test-dynamicpv-t4jj": networkPlugin cni failed to set up pod "exec-volume-test-dynamicpv-t4jj_volume-2932" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:50 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fb1bce60e918c3db2b2219d09496f1037fdebf371bbe121e818613a6f6f1000a" network for pod "exec-volume-test-dynamicpv-t4jj": networkPlugin cni failed to set up pod "exec-volume-test-dynamicpv-t4jj_volume-2932" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:52 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "89b7af5af6df805d1e4bae629705bb8c6af603f1b6f9cbdbd56123b51ac9efb2" network for pod "exec-volume-test-dynamicpv-t4jj": networkPlugin cni failed to set up pod "exec-volume-test-dynamicpv-t4jj_volume-2932" network: unable to allocate IP via local cilium agent: [POST 
/ipam][502] postIpamFailure No more IPs available Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:54 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4b989c149847a56fff23c160178aa3ae68002d8aef74eee76cc5fbae0086bf14" network for pod "exec-volume-test-dynamicpv-t4jj": networkPlugin cni failed to set up pod "exec-volume-test-dynamicpv-t4jj_volume-2932" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:52:43.962: INFO: At 2023-01-15 02:47:57 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f67a88b870100da203d2a84b797d72a01ebd77d2ed935aa4e7ec47e39d5b09ac" network for pod "exec-volume-test-dynamicpv-t4jj": networkPlugin cni failed to set up pod "exec-volume-test-dynamicpv-t4jj_volume-2932" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:52:43.962: INFO: At 2023-01-15 02:48:02 +0000 UTC - event for exec-volume-test-dynamicpv-t4jj: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "83699f94f4fcab9861a000037d9043e67ff95eb054aee84632298c8fca10b2e6" network for pod "exec-volume-test-dynamicpv-t4jj": networkPlugin cni failed to set up pod "exec-volume-test-dynamicpv-t4jj_volume-2932" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:52:44.109: INFO: POD NODE PHASE GRACE CONDITIONS Jan 15 02:52:44.109: INFO: Jan 15 02:52:44.257: INFO: Logging node info for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:52:44.405: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-43-202.ap-northeast-1.compute.internal bc0ee9cc-d624-47c0-9b67-04d40afa9f71 50502 0 2023-01-15 02:20:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-43-202.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-43-202.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-011b6dcd6ec6449ff"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:42:39 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-15 02:45:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-011b6dcd6ec6449ff,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078206976 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973349376 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:33 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:33 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:33 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:52:33 +0000 UTC,LastTransitionTime:2023-01-15 02:20:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.43.202,},NodeAddress{Type:ExternalIP,Address:13.231.182.149,},NodeAddress{Type:Hostname,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-231-182-149.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec22781cfe518dfd12a3a957bb6e997a,SystemUUID:ec22781c-fe51-8dfd-12a3-a957bb6e997a,BootID:949cecd1-b820-47c2-afb4-e8715b8e9ccd,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-01c93361e2b930fdd kubernetes.io/csi/ebs.csi.aws.com^vol-0286d26f63c64e1e2],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0286d26f63c64e1e2,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-01c93361e2b930fdd,DevicePath:,},},Config:nil,},} Jan 15 02:52:44.406: INFO: Logging kubelet events for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:52:44.556: INFO: Logging pods the kubelet thinks is on node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 
02:52:44.756: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-gx8nq started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:44.757: INFO: pod-subpath-test-preprovisionedpv-7pjv started at 2023-01-15 02:52:37 +0000 UTC (2+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Init container init-volume-preprovisionedpv-7pjv ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Init container test-init-volume-preprovisionedpv-7pjv ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container test-container-subpath-preprovisionedpv-7pjv ready: false, restart count 0 Jan 15 02:52:44.757: INFO: inline-volume-tester-zr29m started at 2023-01-15 02:45:33 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 15 02:52:44.757: INFO: hostexec-ip-172-20-43-202.ap-northeast-1.compute.internal-w8nss started at 2023-01-15 02:50:52 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:52:44.757: INFO: test-new-deployment-5d9fdcc779-hxppt started at 2023-01-15 02:48:18 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container httpd ready: false, restart count 0 Jan 15 02:52:44.757: INFO: ss2-0 started at 2023-01-15 02:45:36 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container webserver ready: true, restart count 0 Jan 15 02:52:44.757: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-5fshl started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:44.757: INFO: csi-mockplugin-0 started at 2023-01-15 02:48:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:52:44.757: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container mock ready: false, restart count 0 Jan 15 02:52:44.757: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-5qrcs started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:52:44.757: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-94mx6 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:44.757: INFO: cilium-nwjl7 started at 2023-01-15 02:20:28 +0000 UTC (1+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:52:44.757: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:52:44.757: INFO: up-down-1-lm8v7 started at 2023-01-15 02:48:43 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container up-down-1 ready: false, restart count 0 Jan 15 02:52:44.757: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-jpthm started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:44.757: INFO: 
cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-clxvc started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:44.757: INFO: csi-mockplugin-attacher-0 started at 2023-01-15 02:48:28 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container csi-attacher ready: true, restart count 0 Jan 15 02:52:44.757: INFO: csi-hostpathplugin-0 started at 2023-01-15 02:51:27 +0000 UTC (0+7 container statuses recorded) Jan 15 02:52:44.757: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container csi-resizer ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container hostpath ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container liveness-probe ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 15 02:52:44.757: INFO: pod-configmaps-b8f7d393-de83-47a9-955b-cd23b047200d started at 2023-01-15 02:48:00 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container agnhost-container ready: false, restart count 0 Jan 15 02:52:44.757: INFO: local-client started at 2023-01-15 02:52:01 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container local-client ready: false, restart count 0 Jan 15 02:52:44.757: INFO: csi-hostpathplugin-0 started at 2023-01-15 02:49:05 +0000 UTC (0+7 container statuses recorded) Jan 15 02:52:44.757: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container csi-resizer ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container hostpath ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container liveness-probe ready: false, restart count 0 Jan 15 02:52:44.757: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 15 02:52:44.757: INFO: ebs-csi-node-bwkxc started at 2023-01-15 02:20:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:52:44.757: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:52:44.757: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:52:44.757: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:52:44.757: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8sg5j started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:44.757: INFO: pod-ed146c16-d6fe-4a57-9e00-84562904396e started at 2023-01-15 02:51:01 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:52:44.757: INFO: hostexec-ip-172-20-43-202.ap-northeast-1.compute.internal-bzjf4 started at 2023-01-15 02:52:28 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:52:44.757: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-rjqfv started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: 
INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:44.757: INFO: pod-ready started at 2023-01-15 02:48:36 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container pod-readiness-gate ready: false, restart count 0 Jan 15 02:52:44.757: INFO: inline-volume-tester-tgwrt started at 2023-01-15 02:49:05 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 15 02:52:44.757: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zg7rx started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:44.757: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-kspn8 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:44.757: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:45.396: INFO: Latency metrics for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:52:45.396: INFO: Logging node info for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:52:45.544: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-47-130.ap-northeast-1.compute.internal ea498620-e864-4902-9d30-d8389c23e560 50216 0 2023-01-15 02:18:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-47-130.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-07af92bd9e489a81a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-15 02:18:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-15 02:19:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-15 02:19:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:19:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } 
{kubelet Update v1 2023-01-15 02:21:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-07af92bd9e489a81a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3918811136 0} {<nil>} 3826964Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3813953536 0} {<nil>} 3724564Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:51:56 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:51:56 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:51:56 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:51:56 +0000 UTC,LastTransitionTime:2023-01-15 02:19:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.47.130,},NodeAddress{Type:ExternalIP,Address:52.193.20.80,},NodeAddress{Type:Hostname,Address:ip-172-20-47-130.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-47-130.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-193-20-80.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec20ce72a358a22f56cbd92c76be5dbf,SystemUUID:ec20ce72-a358-a22f-56cb-d92c76be5dbf,BootID:1bf7fead-d48b-4969-8efe-375fd816430f,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:135240444,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:125046097,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.27.0-alpha.1],SizeBytes:105579814,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.27.0-alpha.1],SizeBytes:102633747,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:53521430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 
registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.27.0-alpha.1],SizeBytes:8787111,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:52:45.544: INFO: Logging kubelet events for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:52:45.694: INFO: Logging pods the kubelet thinks is on node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:52:45.847: INFO: etcd-manager-events-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:45.847: INFO: Container etcd-manager ready: true, restart count 0 Jan 15 02:52:45.847: INFO: etcd-manager-main-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:45.847: INFO: Container etcd-manager ready: true, restart count 0 Jan 15 02:52:45.847: INFO: kube-apiserver-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+2 container statuses recorded) Jan 15 02:52:45.847: INFO: Container healthcheck ready: true, restart count 0 Jan 15 02:52:45.847: INFO: Container kube-apiserver ready: true, restart count 1 Jan 15 02:52:45.847: INFO: kube-scheduler-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:45.847: INFO: Container kube-scheduler ready: true, restart count 0 Jan 15 02:52:45.847: INFO: dns-controller-7db8b74b99-p5n7k started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:45.847: INFO: Container dns-controller ready: true, restart count 0 Jan 15 02:52:45.847: INFO: ebs-csi-controller-6d664d6f54-7h2vq started at 2023-01-15 02:19:21 +0000 UTC (0+5 container statuses recorded) Jan 15 02:52:45.847: INFO: Container csi-attacher ready: true, restart count 0 Jan 15 02:52:45.847: INFO: Container csi-provisioner ready: true, restart count 0 Jan 15 02:52:45.847: INFO: Container csi-resizer ready: true, restart count 0 Jan 15 02:52:45.847: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:52:45.847: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:52:45.847: INFO: kube-controller-manager-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:45.847: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 15 02:52:45.847: INFO: cilium-2scfb started at 2023-01-15 02:19:20 +0000 UTC (1+1 container statuses recorded) Jan 15 02:52:45.847: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:52:45.847: INFO: Container cilium-agent ready: true, restart count 1 Jan 15 02:52:45.847: INFO: ebs-csi-node-5h6bj started at 2023-01-15 02:19:21 +0000 UTC (0+3 container statuses recorded) Jan 15 02:52:45.847: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:52:45.847: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:52:45.847: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:52:45.847: INFO: kops-controller-jfrgz started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:45.847: INFO: Container kops-controller ready: 
true, restart count 0 Jan 15 02:52:45.847: INFO: cilium-operator-d47487c76-r5k8h started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:45.847: INFO: Container cilium-operator ready: true, restart count 1 Jan 15 02:52:46.312: INFO: Latency metrics for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:52:46.312: INFO: Logging node info for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:52:46.460: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-49-36.ap-northeast-1.compute.internal 26f759b3-6247-49a4-a840-624b157bb011 49041 0 2023-01-15 02:20:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-49-36.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-49-36.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9202":"ip-172-20-49-36.ap-northeast-1.compute.internal","ebs.csi.aws.com":"i-0e51c4a94e7d8d635"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:46:21 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-15 02:46:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0e51c4a94e7d8d635,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078206976 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973349376 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:48:36 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:48:36 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:48:36 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:48:36 +0000 UTC,LastTransitionTime:2023-01-15 02:20:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.49.36,},NodeAddress{Type:ExternalIP,Address:13.230.104.186,},NodeAddress{Type:Hostname,Address:ip-172-20-49-36.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-49-36.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-230-104-186.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec21c80a557af692a945a70b52b34335,SystemUUID:ec21c80a-557a-f692-a945-a70b52b34335,BootID:382fa58a-a2d1-4a1a-90a9-82a29ddf0315,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e 
k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b 
k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9202^ca6f8149-947e-11ed-a151-a69f1fbca41f kubernetes.io/csi/ebs.csi.aws.com^vol-01f46e370d57b6be1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9202^ca6f8149-947e-11ed-a151-a69f1fbca41f,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-01f46e370d57b6be1,DevicePath:,},},Config:nil,},} Jan 15 02:52:46.460: INFO: Logging kubelet events for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:52:46.609: INFO: Logging pods the kubelet thinks is on node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:52:46.777: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-gnpk2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:46.777: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zfxfr started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:46.777: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-z5vgr started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:46.777: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-85xj4 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:46.777: INFO: csi-hostpathplugin-0 started at 2023-01-15 02:41:47 +0000 UTC (0+7 container statuses recorded) Jan 15 02:52:46.777: INFO: Container csi-attacher ready: true, restart count 0 Jan 15 02:52:46.777: INFO: Container csi-provisioner ready: true, restart count 0 Jan 15 02:52:46.777: INFO: Container csi-resizer ready: true, restart count 0 Jan 15 02:52:46.777: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 15 02:52:46.777: INFO: Container hostpath ready: true, restart count 0 Jan 15 
02:52:46.777: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:52:46.777: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:52:46.777: INFO: pod-secrets-0a2f82ba-a542-4584-a4fc-a5ddc247feb0 started at 2023-01-15 02:47:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container secret-volume-test ready: false, restart count 0 Jan 15 02:52:46.777: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ksvxw started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:46.777: INFO: cilium-fj5wg started at 2023-01-15 02:20:29 +0000 UTC (1+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:52:46.777: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:52:46.777: INFO: pod-projected-secrets-853ed03a-5587-47ea-9281-e95cdd6e4c01 started at 2023-01-15 02:48:01 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container projected-secret-volume-test ready: false, restart count 0 Jan 15 02:52:46.777: INFO: ebs-csi-node-k85ww started at 2023-01-15 02:20:29 +0000 UTC (0+3 container statuses recorded) Jan 15 02:52:46.777: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:52:46.777: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:52:46.777: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:52:46.777: INFO: hostexec-ip-172-20-49-36.ap-northeast-1.compute.internal-rvnjn started at 2023-01-15 02:47:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:52:46.777: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8nm4v started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:52:46.777: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-fnkcp started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:46.777: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-2pcvv started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:46.777: INFO: pod-subpath-test-preprovisionedpv-4gb9 started at 2023-01-15 02:48:06 +0000 UTC (2+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Init container init-volume-preprovisionedpv-4gb9 ready: false, restart count 0 Jan 15 02:52:46.777: INFO: Init container test-init-volume-preprovisionedpv-4gb9 ready: false, restart count 0 Jan 15 02:52:46.777: INFO: Container test-container-subpath-preprovisionedpv-4gb9 ready: false, restart count 0 Jan 15 02:52:46.777: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-lkqt4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:46.777: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-twdj8 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: 
Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:46.777: INFO: csi-mockplugin-resizer-0 started at 2023-01-15 02:47:37 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container csi-resizer ready: true, restart count 0 Jan 15 02:52:46.777: INFO: inline-volume-tester-x567b started at 2023-01-15 02:46:20 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.777: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 15 02:52:46.777: INFO: csi-mockplugin-0 started at 2023-01-15 02:47:37 +0000 UTC (0+3 container statuses recorded) Jan 15 02:52:46.777: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:52:46.777: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:52:46.777: INFO: Container mock ready: false, restart count 0 Jan 15 02:52:46.777: INFO: up-down-1-l8ptm started at 2023-01-15 02:48:43 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.778: INFO: Container up-down-1 ready: false, restart count 0 Jan 15 02:52:46.778: INFO: ss2-1 started at 2023-01-15 02:47:49 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.778: INFO: Container webserver ready: true, restart count 0 Jan 15 02:52:46.778: INFO: inline-volume-tester2-nw4hj started at 2023-01-15 02:48:27 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:46.778: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 15 02:52:47.336: INFO: Latency metrics for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:52:47.336: INFO: Logging node info for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:52:47.484: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-51-245.ap-northeast-1.compute.internal 29b48a1c-c57d-4410-932e-64e9296beb36 49583 0 2023-01-15 02:20:25 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-51-245.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-51-245.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0c39c90a04ae30573"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-15 02:39:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0c39c90a04ae30573,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:50:13 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:50:13 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:50:13 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:50:13 +0000 UTC,LastTransitionTime:2023-01-15 02:20:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.245,},NodeAddress{Type:ExternalIP,Address:18.183.227.37,},NodeAddress{Type:Hostname,Address:ip-172-20-51-245.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-51-245.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-183-227-37.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a7a3540b1bd030cf21ea473814791,SystemUUID:ec2a7a35-40b1-bd03-0cf2-1ea473814791,BootID:adde01e7-fded-49e3-90e6-8f78589ee04d,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:52:47.484: INFO: Logging kubelet events for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:52:47.634: INFO: Logging pods the kubelet thinks is on node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:52:47.791: INFO: cilium-wmrqz started at 2023-01-15 02:20:26 +0000 UTC (1+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:52:47.791: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:52:47.791: INFO: ebs-csi-node-4jd68 started at 2023-01-15 02:20:26 +0000 UTC (0+3 container statuses recorded) Jan 15 02:52:47.791: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:52:47.791: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:52:47.791: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:52:47.791: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-b5gxq started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container 
cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:47.791: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-w8xjs started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:47.791: INFO: csi-mockplugin-0 started at 2023-01-15 02:52:27 +0000 UTC (0+3 container statuses recorded) Jan 15 02:52:47.791: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:52:47.791: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:52:47.791: INFO: Container mock ready: false, restart count 0 Jan 15 02:52:47.791: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4rvct started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:47.791: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-vjzvp started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:47.791: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zzf5k started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:47.791: INFO: ss2-2 started at 2023-01-15 02:48:20 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container webserver ready: false, restart count 0 Jan 15 02:52:47.791: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-rwqgd started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:47.791: INFO: coredns-autoscaler-557ccb4c66-2dfbk started at 2023-01-15 02:20:46 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container autoscaler ready: true, restart count 0 Jan 15 02:52:47.791: INFO: pod-subpath-test-inlinevolume-tbfn started at 2023-01-15 02:51:39 +0000 UTC (2+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Init container init-volume-inlinevolume-tbfn ready: false, restart count 0 Jan 15 02:52:47.791: INFO: Init container test-init-volume-inlinevolume-tbfn ready: false, restart count 0 Jan 15 02:52:47.791: INFO: Container test-container-subpath-inlinevolume-tbfn ready: false, restart count 0 Jan 15 02:52:47.791: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ncnd2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:47.791: INFO: client-containers-0d74e631-4495-4380-894a-2ae98dd10b07 started at 2023-01-15 02:51:34 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container agnhost-container ready: false, restart count 0 Jan 15 02:52:47.791: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4ngm4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:47.791: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-swpkn started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses 
recorded) Jan 15 02:52:47.791: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:47.791: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-wwzg2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:47.791: INFO: csi-mockplugin-attacher-0 started at 2023-01-15 02:52:28 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:52:47.791: INFO: csi-mockplugin-resizer-0 started at 2023-01-15 02:52:28 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container csi-resizer ready: false, restart count 0 Jan 15 02:52:47.791: INFO: coredns-867df8f45c-586xt started at 2023-01-15 02:20:46 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:47.791: INFO: Container coredns ready: true, restart count 0 Jan 15 02:52:48.314: INFO: Latency metrics for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:52:48.314: INFO: Logging node info for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:52:48.462: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-62-124.ap-northeast-1.compute.internal c1941ec4-d9e4-4772-a010-70d33526ed92 50325 0 2023-01-15 02:20:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-62-124.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-62-124.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0d77114c1af529cd3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-15 02:52:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0d77114c1af529cd3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:21 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:21 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:21 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:52:21 +0000 UTC,LastTransitionTime:2023-01-15 02:20:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.62.124,},NodeAddress{Type:ExternalIP,Address:13.115.110.140,},NodeAddress{Type:Hostname,Address:ip-172-20-62-124.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-62-124.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-115-110-140.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2c70e0dbd67c31e6dd0538493d8c3c,SystemUUID:ec2c70e0-dbd6-7c31-e6dd-0538493d8c3c,BootID:981078cd-0f59-4a44-8062-deda147f2986,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:299475478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:56504541,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:52:48.462: INFO: Logging kubelet events for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:52:48.611: INFO: Logging pods the kubelet thinks is on node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:52:48.764: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ftqzz started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:48.764: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-q75dx started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:48.764: INFO: up-down-1-hn8km started at 2023-01-15 02:48:43 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Container up-down-1 ready: true, restart count 0 Jan 15 02:52:48.764: INFO: metadata-volume-a1d890c5-b9f7-4e8c-8891-9a05addd5332 started at 2023-01-15 02:52:38 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Container client-container ready: false, restart count 0 Jan 15 02:52:48.764: INFO: cilium-j8bh4 started at 2023-01-15 02:20:28 +0000 UTC (1+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:52:48.764: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:52:48.764: INFO: ebs-csi-node-5qjb6 started at 2023-01-15 02:20:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:52:48.764: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:52:48.764: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:52:48.764: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:52:48.764: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-l4dnx started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:48.764: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-6bj8q started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:48.764: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4kzd6 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:48.764: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8pdcw started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:48.764: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-hmpns started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:48.764: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-j92t4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, 
restart count 0 Jan 15 02:52:48.764: INFO: hostpath-symlink-prep-provisioning-5455 started at 2023-01-15 02:48:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.764: INFO: Container init-volume-provisioning-5455 ready: false, restart count 0 Jan 15 02:52:48.764: INFO: coredns-867df8f45c-bk6lp started at 2023-01-15 02:21:13 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.765: INFO: Container coredns ready: true, restart count 0 Jan 15 02:52:48.765: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-m72pw started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.765: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:48.765: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-74jlv started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:52:48.765: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:52:49.259: INFO: Latency metrics for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:52:49.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "volume-2932" for this suite.
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\sExternal\sStorage\s\[Driver\:\sebs\.csi\.aws\.com\]\s\[Testpattern\:\sDynamic\sPV\s\(filesystem\svolmode\)\]\svolumeMode\sshould\snot\smount\s\/\smap\sunused\svolumes\sin\sa\spod\s\[LinuxOnly\]$'
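The escaped --ginkgo.focus value above selects the single test "External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]" from the Kubernetes e2e suite. If the e2e.test binary has already been built, the same test can also be run directly against an existing cluster; a sketch only, where the binary path and provider/kubeconfig flags are assumptions to adjust for your environment:

_output/bin/e2e.test \
  --kubeconfig="$HOME/.kube/config" \
  --provider=aws \
  --ginkgo.focus='\[Driver: ebs\.csi\.aws\.com\] \[Testpattern: Dynamic PV \(filesystem volmode\)\] volumeMode should not mount / map unused volumes in a pod \[LinuxOnly\]'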
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352 Jan 15 02:47:33.140: Unexpected error: <*errors.errorString | 0xc0002382b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:385from junit_08.xml
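"timed out waiting for the condition" is the generic timeout string from the k8s.io/apimachinery wait helpers; here the test gave up after roughly five minutes (02:42:32 to 02:47:33) waiting for its pod. The namespace events collected during teardown (further below) show the actual cause: sandbox creation on node ip-172-20-43-202 kept failing because the local cilium agent could not allocate a pod IP ("No more IPs available"). A hedged sketch of commands for checking IP pressure on that node; the k8s-app=cilium label and the cilium CLI inside the agent pod are assumptions about this cluster's Cilium install:

# pod CIDR assigned to the node the pod was scheduled to (a /24 here, per the node spec below)
kubectl get node ip-172-20-43-202.ap-northeast-1.compute.internal -o jsonpath='{.spec.podCIDR}{"\n"}'
# number of pods already placed on that node
kubectl get pods --all-namespaces --field-selector spec.nodeName=ip-172-20-43-202.ap-northeast-1.compute.internal -o name | wc -l
# IPAM state reported by the cilium agent running on that node
CILIUM_POD=$(kubectl -n kube-system get pods -l k8s-app=cilium \
  --field-selector spec.nodeName=ip-172-20-43-202.ap-northeast-1.compute.internal -o name | head -n1)
kubectl -n kube-system exec "$CILIUM_POD" -- cilium status | grep -i ipam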
[BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/framework/testsuite.go:51 [BeforeEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 15 02:42:31.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename volumemode STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not mount / map unused volumes in a pod [LinuxOnly] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:352 Jan 15 02:42:32.101: INFO: Creating resource for dynamic PV Jan 15 02:42:32.101: INFO: Using claimSize:1Gi, test suite supported size:{ 1Mi}, driver(ebs.csi.aws.com) supported size:{ 1Mi} STEP: creating a StorageClass volumemode-3265-e2e-scmjq7f STEP: creating a claim STEP: Creating pod Jan 15 02:47:33.140: FAIL: Unexpected error: <*errors.errorString | 0xc0002382b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/storage/testsuites.(*volumeModeTestSuite).DefineTests.func7() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:385 +0x62b k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000582d00, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a Jan 15 02:47:33.141: INFO: Deleting pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a" in namespace "volumemode-3265" Jan 15 02:47:33.291: INFO: Wait up to 5m0s for pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a" to be fully deleted STEP: Deleting pvc Jan 15 02:47:41.880: INFO: Deleting PersistentVolumeClaim "ebs.csi.aws.comp8q62" Jan 15 02:47:42.032: INFO: Waiting up to 3m0s for PersistentVolume pvc-a83d444c-1532-48e3-9b26-740e6d834417 to get deleted Jan 15 02:47:42.179: INFO: PersistentVolume pvc-a83d444c-1532-48e3-9b26-740e6d834417 found and phase=Released (146.713141ms) Jan 15 02:47:47.326: INFO: PersistentVolume pvc-a83d444c-1532-48e3-9b26-740e6d834417 was removed STEP: Deleting sc [AfterEach] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "volumemode-3265". STEP: Found 18 events.
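Among the 18 events listed below, ProvisioningSucceeded and SuccessfulAttachVolume show that the EBS side completed within seconds of claim creation; every subsequent FailedCreatePodSandBox points at CNI/IPAM rather than storage. To confirm the attachment state independently, the cluster-scoped VolumeAttachment objects can be listed while the PV and claim still exist; a minimal sketch, assuming kubectl access to the same cluster:

# attachments for the provisioned PV (ATTACHED column should read true)
kubectl get volumeattachment -o wide | grep pvc-a83d444c-1532-48e3-9b26-740e6d834417
# claim-side view, including bound PV and recent events
kubectl -n volumemode-3265 describe pvc ebs.csi.aws.comp8q62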
Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:32 +0000 UTC - event for ebs.csi.aws.comp8q62: {persistentvolume-controller } WaitForFirstConsumer: waiting for first consumer to be created before binding Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:32 +0000 UTC - event for ebs.csi.aws.comp8q62: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:32 +0000 UTC - event for ebs.csi.aws.comp8q62: {ebs.csi.aws.com_ip-172-20-47-130_a755f206-d967-43ea-92fb-e7044f6cbd67 } Provisioning: External provisioner is provisioning volume for claim "volumemode-3265/ebs.csi.aws.comp8q62" Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:36 +0000 UTC - event for ebs.csi.aws.comp8q62: {ebs.csi.aws.com_ip-172-20-47-130_a755f206-d967-43ea-92fb-e7044f6cbd67 } ProvisioningSucceeded: Successfully provisioned volume pvc-a83d444c-1532-48e3-9b26-740e6d834417 Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:36 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {default-scheduler } Scheduled: Successfully assigned volumemode-3265/pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a to ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:38 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f9280c28e0ab451f262ed492a4d42bc7e34aafd3af270676d50dfd6cbd93a8a5" network for pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a": networkPlugin cni failed to set up pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a_volumemode-3265" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:39 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-a83d444c-1532-48e3-9b26-740e6d834417" Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:39 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:41 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "636a75ff28018f8196fea747ffd391d276ca2ca633c9196bc8502c85f1720235" network for pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a": networkPlugin cni failed to set up pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a_volumemode-3265" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:42 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4a3912693bef42aed93fa43bfd7792cdb9048ac84f834b3f3a0a15a561528327" network for pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a": networkPlugin cni failed to set up pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a_volumemode-3265" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:44 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "cdef0ccb3baabd3b665aaefb8ce6b5b4a66db4d19cda112c14a40ec124388fe7" network for pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a": networkPlugin cni failed to set up pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a_volumemode-3265" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:48 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "ce8c3b4af4f2867b5aec07190bdba5f97dfbd6a40b6c94f79be6e7f8c9fd6112" network for pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a": networkPlugin cni failed to set up pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a_volumemode-3265" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:51 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "15a73d1c8b0228928883e97dbc3317d7e3c13aa3bfee3b6ceaff826d7cc405f7" network for pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a": networkPlugin cni failed to set up pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a_volumemode-3265" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:55 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e7c37408e5d50e1a83e736cfebb4de0349d1ece3fba546dfeab896c679b9140d" network for pod 
"pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a": networkPlugin cni failed to set up pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a_volumemode-3265" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:56 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7739f2f81bf5e303f43859ff53e7b2b338223672b04dd410184742c1574dbeaa" network for pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a": networkPlugin cni failed to set up pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a_volumemode-3265" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:47:47.623: INFO: At 2023-01-15 02:42:58 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedSync: error determining status: rpc error: code = Unknown desc = Error: No such container: cdef0ccb3baabd3b665aaefb8ce6b5b4a66db4d19cda112c14a40ec124388fe7 Jan 15 02:47:47.623: INFO: At 2023-01-15 02:43:01 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c6df6213a9589532636882ffcf9368f5715ad120b8b74945a06623657dbf4f8d" network for pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a": networkPlugin cni failed to set up pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a_volumemode-3265" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:47:47.623: INFO: At 2023-01-15 02:43:02 +0000 UTC - event for pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5a439ace163474e37f409d424f1ef6846416744ee9e2951a575e040f78c92b63" network for pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a": networkPlugin cni failed to set up pod "pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a_volumemode-3265" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:47:47.770: INFO: POD NODE PHASE GRACE CONDITIONS Jan 15 02:47:47.770: INFO: Jan 15 02:47:47.918: INFO: Logging node info for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:47:48.065: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-43-202.ap-northeast-1.compute.internal bc0ee9cc-d624-47c0-9b67-04d40afa9f71 48282 0 2023-01-15 02:20:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-43-202.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-43-202.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 
topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-011b6dcd6ec6449ff"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:42:39 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-15 02:45:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-011b6dcd6ec6449ff,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078206976 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973349376 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:47:26 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:47:26 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:47:26 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:47:26 +0000 UTC,LastTransitionTime:2023-01-15 02:20:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.43.202,},NodeAddress{Type:ExternalIP,Address:13.231.182.149,},NodeAddress{Type:Hostname,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-231-182-149.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec22781cfe518dfd12a3a957bb6e997a,SystemUUID:ec22781c-fe51-8dfd-12a3-a957bb6e997a,BootID:949cecd1-b820-47c2-afb4-e8715b8e9ccd,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd 
k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-01c93361e2b930fdd kubernetes.io/csi/ebs.csi.aws.com^vol-0e64399c317c780b9],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-01c93361e2b930fdd,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0e64399c317c780b9,DevicePath:,},},Config:nil,},} Jan 15 02:47:48.066: INFO: Logging 
kubelet events for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:47:48.215: INFO: Logging pods the kubelet thinks is on node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:47:48.374: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-kspn8 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:48.374: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zg7rx started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:48.374: INFO: pod-host-path-test started at 2023-01-15 02:45:18 +0000 UTC (0+2 container statuses recorded) Jan 15 02:47:48.374: INFO: Container test-container-1 ready: false, restart count 0 Jan 15 02:47:48.374: INFO: Container test-container-2 ready: false, restart count 0 Jan 15 02:47:48.374: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-gx8nq started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:48.374: INFO: service-proxy-disabled-w9tn2 started at 2023-01-15 02:43:29 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container service-proxy-disabled ready: false, restart count 0 Jan 15 02:47:48.374: INFO: inline-volume-tester-zr29m started at 2023-01-15 02:45:33 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 15 02:47:48.374: INFO: liveness-9c1b48db-4273-4fb7-9525-dcd43fbca8f3 started at 2023-01-15 02:45:49 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container agnhost-container ready: false, restart count 0 Jan 15 02:47:48.374: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-94mx6 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:48.374: INFO: cilium-nwjl7 started at 2023-01-15 02:20:28 +0000 UTC (1+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:47:48.374: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:47:48.374: INFO: ss2-0 started at 2023-01-15 02:45:36 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container webserver ready: false, restart count 0 Jan 15 02:47:48.374: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-5fshl started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:48.374: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-5qrcs started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:47:48.374: INFO: pod-projected-secrets-f7ccf2fc-1067-4c1e-980b-5331e6748ab5 started at 2023-01-15 02:45:32 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container projected-secret-volume-test ready: false, restart count 0 Jan 15 02:47:48.374: INFO: 
cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-jpthm started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:48.374: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-clxvc started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:48.374: INFO: security-context-8e259318-41aa-407b-9e07-f2cbce91e446 started at 2023-01-15 02:47:34 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container test-container ready: false, restart count 0 Jan 15 02:47:48.374: INFO: pod-subpath-test-inlinevolume-mv6s started at 2023-01-15 02:42:42 +0000 UTC (1+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Init container init-volume-inlinevolume-mv6s ready: false, restart count 0 Jan 15 02:47:48.374: INFO: Container test-container-subpath-inlinevolume-mv6s ready: false, restart count 0 Jan 15 02:47:48.374: INFO: csi-mockplugin-0 started at 2023-01-15 02:46:30 +0000 UTC (0+4 container statuses recorded) Jan 15 02:47:48.374: INFO: Container busybox ready: false, restart count 0 Jan 15 02:47:48.374: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:47:48.374: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:47:48.374: INFO: Container mock ready: false, restart count 0 Jan 15 02:47:48.374: INFO: ebs-csi-node-bwkxc started at 2023-01-15 02:20:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:47:48.374: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:47:48.374: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:47:48.374: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:47:48.374: INFO: downwardapi-volume-8c6b4ef7-a68d-4461-a345-ee9522f05d5f started at 2023-01-15 02:46:02 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container client-container ready: false, restart count 0 Jan 15 02:47:48.374: INFO: netserver-0 started at 2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container webserver ready: true, restart count 0 Jan 15 02:47:48.374: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-rjqfv started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:48.374: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8sg5j started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:48.374: INFO: pod-secrets-a94137a6-dc2b-4086-a3eb-f2cff844958f started at 2023-01-15 02:45:23 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container secret-volume-test ready: false, restart count 0 Jan 15 02:47:48.374: INFO: netserver-0 started at 2023-01-15 02:46:10 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container webserver ready: true, restart count 0 Jan 15 02:47:48.374: INFO: exec-volume-test-dynamicpv-t4jj started at 2023-01-15 02:47:24 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:48.374: INFO: Container exec-container-dynamicpv-t4jj ready: false, restart count 0 Jan 15 02:47:49.023: INFO: Latency 
metrics for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:47:49.023: INFO: Logging node info for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:47:49.171: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-47-130.ap-northeast-1.compute.internal ea498620-e864-4902-9d30-d8389c23e560 47922 0 2023-01-15 02:18:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-47-130.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-07af92bd9e489a81a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-15 02:18:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-15 02:19:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-15 02:19:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:19:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-15 02:21:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-07af92bd9e489a81a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3918811136 0} {<nil>} 3826964Ki BinarySI},pods: 
{{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3813953536 0} {<nil>} 3724564Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:46:50 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:46:50 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:46:50 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:46:50 +0000 UTC,LastTransitionTime:2023-01-15 02:19:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.47.130,},NodeAddress{Type:ExternalIP,Address:52.193.20.80,},NodeAddress{Type:Hostname,Address:ip-172-20-47-130.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-47-130.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-193-20-80.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec20ce72a358a22f56cbd92c76be5dbf,SystemUUID:ec20ce72-a358-a22f-56cb-d92c76be5dbf,BootID:1bf7fead-d48b-4969-8efe-375fd816430f,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:135240444,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:125046097,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.27.0-alpha.1],SizeBytes:105579814,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.27.0-alpha.1],SizeBytes:102633747,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e 
registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:53521430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.27.0-alpha.1],SizeBytes:8787111,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:47:49.172: INFO: Logging kubelet events for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:47:49.323: INFO: Logging pods the kubelet thinks is on node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:47:49.478: INFO: kube-controller-manager-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:49.478: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 15 02:47:49.478: INFO: cilium-2scfb started at 2023-01-15 02:19:20 +0000 UTC (1+1 container statuses recorded) Jan 15 02:47:49.478: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:47:49.478: INFO: Container cilium-agent ready: true, restart count 1 Jan 15 02:47:49.478: INFO: ebs-csi-node-5h6bj started at 2023-01-15 02:19:21 +0000 UTC (0+3 container statuses recorded) Jan 15 02:47:49.478: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:47:49.478: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:47:49.478: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:47:49.478: INFO: kops-controller-jfrgz started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:49.478: INFO: Container kops-controller ready: true, restart count 0 Jan 15 02:47:49.478: INFO: cilium-operator-d47487c76-r5k8h started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:49.478: INFO: Container cilium-operator ready: true, restart count 1 Jan 15 02:47:49.478: INFO: etcd-manager-events-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:49.478: INFO: Container etcd-manager ready: true, restart count 0 Jan 15 02:47:49.478: INFO: etcd-manager-main-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC 
(0+1 container statuses recorded) Jan 15 02:47:49.478: INFO: Container etcd-manager ready: true, restart count 0 Jan 15 02:47:49.478: INFO: kube-apiserver-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+2 container statuses recorded) Jan 15 02:47:49.478: INFO: Container healthcheck ready: true, restart count 0 Jan 15 02:47:49.478: INFO: Container kube-apiserver ready: true, restart count 1 Jan 15 02:47:49.478: INFO: kube-scheduler-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:49.478: INFO: Container kube-scheduler ready: true, restart count 0 Jan 15 02:47:49.478: INFO: dns-controller-7db8b74b99-p5n7k started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:49.478: INFO: Container dns-controller ready: true, restart count 0 Jan 15 02:47:49.478: INFO: ebs-csi-controller-6d664d6f54-7h2vq started at 2023-01-15 02:19:21 +0000 UTC (0+5 container statuses recorded) Jan 15 02:47:49.478: INFO: Container csi-attacher ready: true, restart count 0 Jan 15 02:47:49.478: INFO: Container csi-provisioner ready: true, restart count 0 Jan 15 02:47:49.478: INFO: Container csi-resizer ready: true, restart count 0 Jan 15 02:47:49.478: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:47:49.478: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:47:49.939: INFO: Latency metrics for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:47:49.939: INFO: Logging node info for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:47:50.087: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-49-36.ap-northeast-1.compute.internal 26f759b3-6247-49a4-a840-624b157bb011 47752 0 2023-01-15 02:20:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-49-36.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-49-36.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9202":"ip-172-20-49-36.ap-northeast-1.compute.internal","ebs.csi.aws.com":"i-0e51c4a94e7d8d635"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:46:21 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-15 02:46:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0e51c4a94e7d8d635,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078206976 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973349376 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:46:23 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:46:23 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:46:23 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:46:23 +0000 UTC,LastTransitionTime:2023-01-15 02:20:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.49.36,},NodeAddress{Type:ExternalIP,Address:13.230.104.186,},NodeAddress{Type:Hostname,Address:ip-172-20-49-36.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-49-36.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-230-104-186.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec21c80a557af692a945a70b52b34335,SystemUUID:ec21c80a-557a-f692-a945-a70b52b34335,BootID:382fa58a-a2d1-4a1a-90a9-82a29ddf0315,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9202^ca6f8149-947e-11ed-a151-a69f1fbca41f],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9202^ca6f8149-947e-11ed-a151-a69f1fbca41f,DevicePath:,},},Config:nil,},} Jan 15 02:47:50.087: INFO: Logging kubelet events for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:47:50.236: INFO: Logging pods the kubelet thinks is on node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:47:50.399: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-lkqt4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 
02:47:50.400: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-twdj8 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:50.400: INFO: netserver-1 started at 2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container webserver ready: false, restart count 0 Jan 15 02:47:50.400: INFO: inline-volume-tester-x567b started at 2023-01-15 02:46:20 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 15 02:47:50.400: INFO: csi-mockplugin-0 started at 2023-01-15 02:47:37 +0000 UTC (0+3 container statuses recorded) Jan 15 02:47:50.400: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:47:50.400: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:47:50.400: INFO: Container mock ready: false, restart count 0 Jan 15 02:47:50.400: INFO: csi-mockplugin-resizer-0 started at 2023-01-15 02:47:37 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container csi-resizer ready: false, restart count 0 Jan 15 02:47:50.400: INFO: ss2-1 started at 2023-01-15 02:47:49 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container webserver ready: false, restart count 0 Jan 15 02:47:50.400: INFO: netserver-1 started at 2023-01-15 02:46:10 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container webserver ready: false, restart count 0 Jan 15 02:47:50.400: INFO: csi-hostpathplugin-0 started at 2023-01-15 02:41:47 +0000 UTC (0+7 container statuses recorded) Jan 15 02:47:50.400: INFO: Container csi-attacher ready: true, restart count 0 Jan 15 02:47:50.400: INFO: Container csi-provisioner ready: true, restart count 0 Jan 15 02:47:50.400: INFO: Container csi-resizer ready: true, restart count 0 Jan 15 02:47:50.400: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 15 02:47:50.400: INFO: Container hostpath ready: true, restart count 0 Jan 15 02:47:50.400: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:47:50.400: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:47:50.400: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-gnpk2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:50.400: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zfxfr started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:50.400: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-z5vgr started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:50.400: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-85xj4 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:50.400: INFO: cilium-fj5wg started at 2023-01-15 02:20:29 +0000 UTC (1+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Init container clean-cilium-state ready: true, 
restart count 0 Jan 15 02:47:50.400: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:47:50.400: INFO: local-injector started at 2023-01-15 02:45:36 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container local-injector ready: false, restart count 0 Jan 15 02:47:50.400: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ksvxw started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:50.400: INFO: ephemeral-containers-target-pod started at 2023-01-15 02:46:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container test-container-1 ready: true, restart count 0 Jan 15 02:47:50.400: INFO: ss-0 started at 2023-01-15 02:47:49 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container webserver ready: false, restart count 0 Jan 15 02:47:50.400: INFO: ebs-csi-node-k85ww started at 2023-01-15 02:20:29 +0000 UTC (0+3 container statuses recorded) Jan 15 02:47:50.400: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:47:50.400: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:47:50.400: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:47:50.400: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8nm4v started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:47:50.400: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-fnkcp started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:50.400: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-2pcvv started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:50.400: INFO: hostexec-ip-172-20-49-36.ap-northeast-1.compute.internal-56fhf started at 2023-01-15 02:45:27 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:50.400: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:47:50.913: INFO: Latency metrics for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:47:50.913: INFO: Logging node info for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:47:51.060: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-51-245.ap-northeast-1.compute.internal 29b48a1c-c57d-4410-932e-64e9296beb36 47028 0 2023-01-15 02:20:25 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-51-245.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-51-245.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0c39c90a04ae30573"} node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-15 02:39:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0c39c90a04ae30573,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.245,},NodeAddress{Type:ExternalIP,Address:18.183.227.37,},NodeAddress{Type:Hostname,Address:ip-172-20-51-245.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-51-245.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-183-227-37.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a7a3540b1bd030cf21ea473814791,SystemUUID:ec2a7a35-40b1-bd03-0cf2-1ea473814791,BootID:adde01e7-fded-49e3-90e6-8f78589ee04d,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:47:51.060: INFO: Logging kubelet events for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:47:51.210: INFO: Logging pods the kubelet thinks is on node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:47:51.370: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-rwqgd started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:51.370: INFO: coredns-autoscaler-557ccb4c66-2dfbk started at 2023-01-15 02:20:46 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container autoscaler ready: true, restart count 0 Jan 15 02:47:51.370: INFO: pod-1df18e3d-83da-4374-9fdb-7c83a860b73f started at 2023-01-15 02:43:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:47:51.370: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ncnd2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses 
recorded) Jan 15 02:47:51.370: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:51.370: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-wwzg2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:51.370: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4ngm4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:51.370: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-swpkn started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:51.370: INFO: coredns-867df8f45c-586xt started at 2023-01-15 02:20:46 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container coredns ready: true, restart count 0 Jan 15 02:47:51.370: INFO: service-proxy-disabled-qkqj4 started at 2023-01-15 02:43:29 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container service-proxy-disabled ready: false, restart count 0 Jan 15 02:47:51.370: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-b5gxq started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:51.370: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-w8xjs started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:51.370: INFO: cilium-wmrqz started at 2023-01-15 02:20:26 +0000 UTC (1+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:47:51.370: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:47:51.370: INFO: ebs-csi-node-4jd68 started at 2023-01-15 02:20:26 +0000 UTC (0+3 container statuses recorded) Jan 15 02:47:51.370: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:47:51.370: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:47:51.370: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:47:51.370: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zzf5k started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:51.370: INFO: hostexec-ip-172-20-51-245.ap-northeast-1.compute.internal-hv7d9 started at 2023-01-15 02:43:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:47:51.370: INFO: netserver-2 started at 2023-01-15 02:46:10 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container webserver ready: false, restart count 0 Jan 15 02:47:51.370: INFO: hostexec-ip-172-20-51-245.ap-northeast-1.compute.internal-m447m started at 2023-01-15 02:46:19 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:47:51.370: INFO: netserver-2 started at 
2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container webserver ready: true, restart count 0 Jan 15 02:47:51.370: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4rvct started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:51.370: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-vjzvp started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:47:51.370: INFO: pod-83074687-e233-4ffd-9473-07d6be97d619 started at 2023-01-15 02:46:24 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:51.370: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:47:52.073: INFO: Latency metrics for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:47:52.073: INFO: Logging node info for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:47:52.221: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-62-124.ap-northeast-1.compute.internal c1941ec4-d9e4-4772-a010-70d33526ed92 47127 0 2023-01-15 02:20:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-62-124.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-62-124.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0d77114c1af529cd3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-15 02:40:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0d77114c1af529cd3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:22 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:22 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:22 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:45:22 +0000 UTC,LastTransitionTime:2023-01-15 02:20:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.62.124,},NodeAddress{Type:ExternalIP,Address:13.115.110.140,},NodeAddress{Type:Hostname,Address:ip-172-20-62-124.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-62-124.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-115-110-140.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2c70e0dbd67c31e6dd0538493d8c3c,SystemUUID:ec2c70e0-dbd6-7c31-e6dd-0538493d8c3c,BootID:981078cd-0f59-4a44-8062-deda147f2986,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:299475478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:56504541,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:47:52.221: INFO: Logging kubelet events for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:47:52.370: INFO: Logging pods the kubelet thinks is on node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:47:52.562: INFO: coredns-867df8f45c-bk6lp started at 2023-01-15 02:21:13 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.562: INFO: Container coredns ready: true, restart count 0 Jan 15 02:47:52.562: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-m72pw started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.562: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:52.562: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-74jlv started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.562: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:52.562: INFO: netserver-3 started at 2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.562: INFO: Container webserver ready: true, restart count 0 Jan 15 02:47:52.562: INFO: netserver-3 started at 2023-01-15 02:46:10 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.562: INFO: Container webserver ready: false, restart count 0 Jan 15 02:47:52.562: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ftqzz started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.562: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:52.562: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-q75dx started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.562: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:52.562: INFO: cilium-j8bh4 started at 2023-01-15 02:20:28 +0000 UTC (1+1 container statuses recorded) Jan 15 02:47:52.562: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:47:52.562: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:47:52.562: INFO: ebs-csi-node-5qjb6 started at 2023-01-15 02:20:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:47:52.562: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:47:52.562: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:47:52.562: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:47:52.562: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-l4dnx started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.563: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:52.563: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-6bj8q started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.563: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:47:52.563: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4kzd6 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.563: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:52.563: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8pdcw started at 2023-01-15 
02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.563: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:52.563: INFO: csi-mockplugin-0 started at 2023-01-15 02:42:38 +0000 UTC (0+3 container statuses recorded) Jan 15 02:47:52.563: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:47:52.563: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:47:52.563: INFO: Container mock ready: false, restart count 0 Jan 15 02:47:52.563: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-hmpns started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.563: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:52.563: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-j92t4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.563: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:47:52.563: INFO: service-proxy-disabled-vlcxq started at 2023-01-15 02:43:29 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.563: INFO: Container service-proxy-disabled ready: false, restart count 0 Jan 15 02:47:52.563: INFO: csi-mockplugin-attacher-0 started at 2023-01-15 02:42:38 +0000 UTC (0+1 container statuses recorded) Jan 15 02:47:52.563: INFO: Container csi-attacher ready: true, restart count 0 Jan 15 02:47:53.065: INFO: Latency metrics for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:47:53.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "volumemode-3265" for this suite.
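The per-node dumps above list every pod the kubelet reports for a node together with each container's readiness and restart count. For reference, a similar listing can be produced with client-go by selecting pods on spec.nodeName; the sketch below is illustrative only (it assumes the kubeconfig path the suite logs and picks one node name from the dump) and is not the e2e framework's actual dump helper.

// List the pods bound to one node and print per-container readiness and
// restart counts, similar to the per-node listings in the log above.
// Illustrative sketch only.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as logged by the suite; adjust for other environments.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Example node name taken from the dump above.
	nodeName := "ip-172-20-51-245.ap-northeast-1.compute.internal"

	// Field selector restricts the list to pods scheduled onto this node,
	// across all namespaces. The framework's own dump may gather the same
	// information differently.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s (phase %s)\n", p.Name, p.Status.Phase)
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %v, restart count %d\n", cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}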
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\sdeployment\sreaping\sshould\scascade\sto\sits\sreplica\ssets\sand\spods$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:95 Jan 15 02:53:19.324: Unexpected error: <*errors.errorString | 0xc00526b890>: { s: "error waiting for deployment \"test-new-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"test-new-deployment-5d9fdcc779\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}", } error waiting for deployment "test-new-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:720from junit_14.xml
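This failure is a timeout while polling the Deployment's status until it reports the expected availability; the trace below repeats the same DeploymentStatus roughly every two seconds with AvailableReplicas stuck at 0. A wait loop of that shape can be written with client-go roughly as in the sketch below; this is illustrative only, not the helper referenced at test/e2e/apps/deployment.go:720, and the namespace is a hypothetical placeholder (the suite generates one per test from the "deployment" basename).

// Poll a Deployment until every desired replica is updated and available,
// or the timeout expires. Illustrative sketch only; transient Get errors
// abort the wait here, which a production helper might instead tolerate.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as logged by the suite; adjust for other environments.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Hypothetical namespace; the deployment name is the one from the log.
	ns, name := "deployment-example", "test-new-deployment"

	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		d, err := clientset.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("deployment status: %+v\n", d.Status)

		desired := int32(1)
		if d.Spec.Replicas != nil {
			desired = *d.Spec.Replicas
		}
		// Done once the controller has observed the current generation and
		// all desired replicas are updated and available.
		return d.Status.ObservedGeneration >= d.Generation &&
			d.Status.UpdatedReplicas == desired &&
			d.Status.AvailableReplicas == desired, nil
	})
	if err != nil {
		fmt.Println("gave up waiting:", err)
	}
}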
[BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 15 02:48:17.354: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment reaping should cascade to its replica sets and pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:95 Jan 15 02:48:18.414: INFO: Creating simple deployment test-new-deployment Jan 15 02:48:19.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:21.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:23.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:25.173: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:27.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:29.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:31.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:33.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:35.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:37.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:39.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:41.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:43.181: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:45.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:47.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:48:49.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)}
[The same deployment status is then logged roughly every 2s from Jan 15 02:48:51.173 through Jan 15 02:51:27.174, unchanged on every poll: ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions Available="False" (Reason:"MinimumReplicasUnavailable", "Deployment does not have minimum availability.") and Progressing="True" (Reason:"ReplicaSetUpdated", "ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."), CollisionCount:(*int32)(nil).]
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:31.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:33.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:35.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:37.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:39.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:41.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:43.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:45.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:47.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:49.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:51.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:53.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:55.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:57.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:51:59.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:01.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:03.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:05.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:07.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:09.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:11.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:13.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:15.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:17.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:19.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:21.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:23.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:25.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:27.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:29.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:31.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:33.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:35.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:37.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:39.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:41.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:43.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:45.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:47.174: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:49.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:51.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:53.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:55.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:57.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:52:59.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:53:01.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:53:03.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:53:05.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:53:07.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:53:09.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:53:11.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:53:13.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:53:15.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:53:17.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:53:19.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:53:19.324: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
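The repeated entries above come from the test waiting for the Deployment to report minimum availability and dumping the status on every poll. As a rough illustration only (not the e2e framework's actual helper), a client-go wait loop of the following shape produces this kind of output; the clientset cs, package name, namespace, and deployment name are placeholders:

// Sketch of a poll-until-available loop, assuming cs is an initialized clientset
// (e.g. built from the kubeconfig shown earlier in this log).
package deploywait

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentAvailable polls the Deployment every 2s until the number of
// available replicas matches the desired replica count, or the timeout expires.
func waitForDeploymentAvailable(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Mirrors the log lines above: dump the full status on every poll.
		fmt.Printf("deployment status: %+v\n", d.Status)
		want := int32(1)
		if d.Spec.Replicas != nil {
			want = *d.Spec.Replicas
		}
		return d.Status.AvailableReplicas >= want, nil
	})
}

In the failure below the loop never observes AvailableReplicas reaching the desired count, so the wait times out and the test fails with the last observed status.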
Jan 15 02:53:19.324: FAIL: Unexpected error:
<*errors.errorString | 0xc00526b890>: error waiting for deployment "test-new-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 48, 18, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)}
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.testDeleteDeployment(0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:720 +0x5c5
k8s.io/kubernetes/test/e2e/apps.glob..func4.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:96 +0x1d
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0008af040, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 15 02:53:19.477: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-5964 9eeb2a2f-147a-4672-a25f-f01635966605 48815 1 2023-01-15 02:48:18 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:1 kubectl.kubernetes.io/last-applied-configuration:should-not-copy-to-replica-set test:should-copy-to-replica-set] [] [] [{e2e.test Update apps/v1 2023-01-15 02:48:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:test":{}},"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-15 02:48:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00479ad38 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-15 02:48:18 +0000 UTC,LastTransitionTime:2023-01-15 02:48:18 +0000 
UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-new-deployment-5d9fdcc779" is progressing.,LastUpdateTime:2023-01-15 02:48:18 +0000 UTC,LastTransitionTime:2023-01-15 02:48:18 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 15 02:53:19.630: INFO: New ReplicaSet "test-new-deployment-5d9fdcc779" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-5d9fdcc779 deployment-5964 473a7266-1b8e-4b1c-b988-2dbb301f4d20 48813 1 2023-01-15 02:48:18 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1 test:should-copy-to-replica-set] [{apps/v1 Deployment test-new-deployment 9eeb2a2f-147a-4672-a25f-f01635966605 0xc00479b137 0xc00479b138}] [] [{kube-controller-manager Update apps/v1 2023-01-15 02:48:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{},"f:test":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9eeb2a2f-147a-4672-a25f-f01635966605\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-15 02:48:18 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00479b1c8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 15 02:53:19.781: INFO: Pod "test-new-deployment-5d9fdcc779-hxppt" is not available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-hxppt test-new-deployment-5d9fdcc779- deployment-5964 0bf2f4d7-46ab-48ae-9ede-b9cfa13d58f1 48814 0 2023-01-15 02:48:18 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 
473a7266-1b8e-4b1c-b988-2dbb301f4d20 0xc00479b557 0xc00479b558}] [] [{kube-controller-manager Update v1 2023-01-15 02:48:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"473a7266-1b8e-4b1c-b988-2dbb301f4d20\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-15 02:48:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6ttd9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6ttd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]
string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-43-202.ap-northeast-1.compute.internal,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-15 02:48:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-15 02:48:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-15 02:48:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-15 02:48:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.43.202,PodIP:,StartTime:2023-01-15 02:48:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "deployment-5964". �[1mSTEP�[0m: Found 14 events. 
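Note: the 14 events that follow all tell the same story: the pod was scheduled, but every sandbox creation attempt failed because the node-local cilium agent could not allocate a pod IP ("[POST /ipam][502] postIpamFailure No more IPs available"), so the httpd container never started and the Deployment never became available. The snippet below is one illustrative way to surface that pattern from a namespace's events with client-go; the helper name and the substring it matches on are assumptions taken from the messages in this log, not an official API.

package deploymentnotes

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportSandboxIPAMFailures lists the namespace's events and prints the ones
// whose reason is FailedCreatePodSandBox and whose message reports CNI IPAM
// exhaustion, the failure mode visible in the events below.
func reportSandboxIPAMFailures(c kubernetes.Interface, ns string) error {
	events, err := c.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, ev := range events.Items {
		if ev.Reason == "FailedCreatePodSandBox" && strings.Contains(ev.Message, "No more IPs available") {
			fmt.Printf("%s/%s: %s\n", ev.InvolvedObject.Kind, ev.InvolvedObject.Name, ev.Message)
		}
	}
	return nil
}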
Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:18 +0000 UTC - event for test-new-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-new-deployment-5d9fdcc779 to 1 Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:18 +0000 UTC - event for test-new-deployment-5d9fdcc779: {replicaset-controller } SuccessfulCreate: Created pod: test-new-deployment-5d9fdcc779-hxppt Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:18 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {default-scheduler } Scheduled: Successfully assigned deployment-5964/test-new-deployment-5d9fdcc779-hxppt to ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:20 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "779b2c817b7307a8cf887b22172bfe952fb10a39f270ed2e4e96f3c4f7d26713" network for pod "test-new-deployment-5d9fdcc779-hxppt": networkPlugin cni failed to set up pod "test-new-deployment-5d9fdcc779-hxppt_deployment-5964" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:22 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:23 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8559c7533bab011eb6a8e60cb43fed6b37901aabe164d496085839326921e9ca" network for pod "test-new-deployment-5d9fdcc779-hxppt": networkPlugin cni failed to set up pod "test-new-deployment-5d9fdcc779-hxppt_deployment-5964" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:25 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f434f1001acd219ec4bad19e0b42eed25134b39c694761abff687813d05e457c" network for pod "test-new-deployment-5d9fdcc779-hxppt": networkPlugin cni failed to set up pod "test-new-deployment-5d9fdcc779-hxppt_deployment-5964" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:28 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "824ad2421fa3b64abe2ad6dfa2aabb64268dcb785e06d86ad02f4c09f9bdb9bb" network for pod "test-new-deployment-5d9fdcc779-hxppt": networkPlugin cni failed to set up pod "test-new-deployment-5d9fdcc779-hxppt_deployment-5964" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:32 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} 
FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5bb580d47423f73237a0c7059f3b3bbe7f0c3d2f1d18c8c194739be152a3b251" network for pod "test-new-deployment-5d9fdcc779-hxppt": networkPlugin cni failed to set up pod "test-new-deployment-5d9fdcc779-hxppt_deployment-5964" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:35 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b651041ee3f0f5d4393364a3dd7d549c585f7f1541677aa4032a6a16c349ea8c" network for pod "test-new-deployment-5d9fdcc779-hxppt": networkPlugin cni failed to set up pod "test-new-deployment-5d9fdcc779-hxppt_deployment-5964" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:41 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "81efc847a8ef124160af099fa03c81532530f57a002dfe0a235608350653cf77" network for pod "test-new-deployment-5d9fdcc779-hxppt": networkPlugin cni failed to set up pod "test-new-deployment-5d9fdcc779-hxppt_deployment-5964" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:48 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b89570cfb7030edd5fadd1933e124adb3750dca096648a4e4906504d7356fc0a" network for pod "test-new-deployment-5d9fdcc779-hxppt": networkPlugin cni failed to set up pod "test-new-deployment-5d9fdcc779-hxppt_deployment-5964" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:52 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d1e8e45f18259e43567a4c053bd2e0521d0a2ea108d94e068435fdc0c2f9c40a" network for pod "test-new-deployment-5d9fdcc779-hxppt": networkPlugin cni failed to set up pod "test-new-deployment-5d9fdcc779-hxppt_deployment-5964" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:53:19.935: INFO: At 2023-01-15 02:48:55 +0000 UTC - event for test-new-deployment-5d9fdcc779-hxppt: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f979a384f8d041fa6aa19444c9ad03fdec19dae8a2970f89ad89b95b80f3393f" network for pod "test-new-deployment-5d9fdcc779-hxppt": networkPlugin cni failed to set up pod "test-new-deployment-5d9fdcc779-hxppt_deployment-5964" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure 
No more IPs available Jan 15 02:53:20.086: INFO: POD NODE PHASE GRACE CONDITIONS Jan 15 02:53:20.087: INFO: test-new-deployment-5d9fdcc779-hxppt ip-172-20-43-202.ap-northeast-1.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:48:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:48:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:48:18 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:48:18 +0000 UTC }] Jan 15 02:53:20.087: INFO: Jan 15 02:53:20.246: INFO: Unable to fetch deployment-5964/test-new-deployment-5d9fdcc779-hxppt/httpd logs: the server rejected our request for an unknown reason (get pods test-new-deployment-5d9fdcc779-hxppt) Jan 15 02:53:20.398: INFO: Logging node info for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:53:20.550: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-43-202.ap-northeast-1.compute.internal bc0ee9cc-d624-47c0-9b67-04d40afa9f71 51404 0 2023-01-15 02:20:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-43-202.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-43-202.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-011b6dcd6ec6449ff"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-15 02:42:39 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-15 02:45:43 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-011b6dcd6ec6449ff,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078206976 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973349376 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:33 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:33 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:33 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:52:33 +0000 UTC,LastTransitionTime:2023-01-15 02:20:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.43.202,},NodeAddress{Type:ExternalIP,Address:13.231.182.149,},NodeAddress{Type:Hostname,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-231-182-149.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec22781cfe518dfd12a3a957bb6e997a,SystemUUID:ec22781c-fe51-8dfd-12a3-a957bb6e997a,BootID:949cecd1-b820-47c2-afb4-e8715b8e9ccd,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-01c93361e2b930fdd kubernetes.io/csi/ebs.csi.aws.com^vol-0286d26f63c64e1e2],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0286d26f63c64e1e2,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-01c93361e2b930fdd,DevicePath:,},},Config:nil,},} Jan 15 02:53:20.550: INFO: Logging kubelet events for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:53:20.704: INFO: Logging pods the kubelet thinks is on node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 
02:53:20.882: INFO: inline-volume-tester-zr29m started at 2023-01-15 02:45:33 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container csi-volume-tester ready: true, restart count 0 Jan 15 02:53:20.882: INFO: simpletest-rc-to-be-deleted-dcwvw started at 2023-01-15 02:52:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:20.882: INFO: hostexec-ip-172-20-43-202.ap-northeast-1.compute.internal-w8nss started at 2023-01-15 02:50:52 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:53:20.882: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-gx8nq started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:20.882: INFO: pod-subpath-test-preprovisionedpv-7pjv started at 2023-01-15 02:52:37 +0000 UTC (2+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Init container init-volume-preprovisionedpv-7pjv ready: false, restart count 0 Jan 15 02:53:20.882: INFO: Init container test-init-volume-preprovisionedpv-7pjv ready: false, restart count 0 Jan 15 02:53:20.882: INFO: Container test-container-subpath-preprovisionedpv-7pjv ready: false, restart count 0 Jan 15 02:53:20.882: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-5qrcs started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:53:20.882: INFO: simpletest-rc-to-be-deleted-9fflm started at 2023-01-15 02:52:55 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:20.882: INFO: cilium-nwjl7 started at 2023-01-15 02:20:28 +0000 UTC (1+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:53:20.882: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:53:20.882: INFO: test-new-deployment-5d9fdcc779-hxppt started at 2023-01-15 02:48:18 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container httpd ready: false, restart count 0 Jan 15 02:53:20.882: INFO: ss2-0 started at 2023-01-15 02:45:36 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container webserver ready: true, restart count 0 Jan 15 02:53:20.882: INFO: hostexec-ip-172-20-43-202.ap-northeast-1.compute.internal-qw7wq started at 2023-01-15 02:52:50 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:53:20.882: INFO: up-down-1-lm8v7 started at 2023-01-15 02:48:43 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container up-down-1 ready: false, restart count 0 Jan 15 02:53:20.882: INFO: simpletest-rc-to-be-deleted-8bnhk started at 2023-01-15 02:52:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:20.882: INFO: simpletest-rc-to-be-deleted-cllvg started at 2023-01-15 02:52:59 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:20.882: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-jpthm started at 2023-01-15 02:39:56 +0000 UTC (0+1 
container statuses recorded) Jan 15 02:53:20.882: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:20.882: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-clxvc started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:20.882: INFO: ebs-csi-node-bwkxc started at 2023-01-15 02:20:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:53:20.882: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:53:20.882: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:53:20.882: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:53:20.882: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-rjqfv started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:20.882: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8sg5j started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:20.882: INFO: pod-ed146c16-d6fe-4a57-9e00-84562904396e started at 2023-01-15 02:51:01 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:53:20.882: INFO: hostexec-ip-172-20-43-202.ap-northeast-1.compute.internal-bzjf4 started at 2023-01-15 02:52:28 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:53:20.882: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-kspn8 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:20.882: INFO: inline-volume-tester-tgwrt started at 2023-01-15 02:49:05 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 15 02:53:20.882: INFO: simpletest-rc-to-be-deleted-fvxll started at 2023-01-15 02:52:55 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:20.882: INFO: simpletest-rc-to-be-deleted-54m4r started at 2023-01-15 02:52:55 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:20.882: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-5fshl started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.882: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:20.882: INFO: csi-mockplugin-0 started at 2023-01-15 02:48:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:53:20.882: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:53:20.882: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container mock ready: false, restart count 0 Jan 15 02:53:20.883: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-94mx6 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: 
true, restart count 0 Jan 15 02:53:20.883: INFO: simpletest-rc-to-be-deleted-f2bgx started at 2023-01-15 02:52:55 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:20.883: INFO: csi-mockplugin-attacher-0 started at 2023-01-15 02:48:28 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container csi-attacher ready: true, restart count 0 Jan 15 02:53:20.883: INFO: simpletest-rc-to-be-deleted-5rv6l started at 2023-01-15 02:52:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:20.883: INFO: csi-hostpathplugin-0 started at 2023-01-15 02:51:27 +0000 UTC (0+7 container statuses recorded) Jan 15 02:53:20.883: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container csi-resizer ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container hostpath ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container liveness-probe ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 15 02:53:20.883: INFO: simpletest-rc-to-be-deleted-d8fdx started at 2023-01-15 02:52:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:20.883: INFO: pod-subpath-test-preprovisionedpv-5kn4 started at 2023-01-15 02:53:07 +0000 UTC (2+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Init container init-volume-preprovisionedpv-5kn4 ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Init container test-init-volume-preprovisionedpv-5kn4 ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container test-container-subpath-preprovisionedpv-5kn4 ready: false, restart count 0 Jan 15 02:53:20.883: INFO: local-client started at 2023-01-15 02:52:01 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container local-client ready: false, restart count 0 Jan 15 02:53:20.883: INFO: csi-hostpathplugin-0 started at 2023-01-15 02:49:05 +0000 UTC (0+7 container statuses recorded) Jan 15 02:53:20.883: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container csi-resizer ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container hostpath ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container liveness-probe ready: false, restart count 0 Jan 15 02:53:20.883: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 15 02:53:20.883: INFO: simpletest-rc-to-be-deleted-9vbjq started at 2023-01-15 02:52:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:20.883: INFO: simpletest-rc-to-be-deleted-8wnc6 started at 2023-01-15 02:52:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:20.883: INFO: simpletest-rc-to-be-deleted-ck5g5 started at 2023-01-15 02:52:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container nginx ready: false, restart count 0 Jan 15 
02:53:20.883: INFO: pod-ready started at 2023-01-15 02:48:36 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container pod-readiness-gate ready: false, restart count 0 Jan 15 02:53:20.883: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zg7rx started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:20.883: INFO: simpletest-rc-to-be-deleted-fldjl started at 2023-01-15 02:52:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:20.883: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:21.711: INFO: Latency metrics for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:53:21.711: INFO: Logging node info for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:53:21.862: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-47-130.ap-northeast-1.compute.internal ea498620-e864-4902-9d30-d8389c23e560 50216 0 2023-01-15 02:18:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-47-130.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-07af92bd9e489a81a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-15 02:18:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-15 02:19:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-15 02:19:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:19:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-15 02:21:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-07af92bd9e489a81a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3918811136 0} {<nil>} 3826964Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3813953536 0} {<nil>} 3724564Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:51:56 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:51:56 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:51:56 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:51:56 +0000 UTC,LastTransitionTime:2023-01-15 02:19:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.47.130,},NodeAddress{Type:ExternalIP,Address:52.193.20.80,},NodeAddress{Type:Hostname,Address:ip-172-20-47-130.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-47-130.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-193-20-80.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec20ce72a358a22f56cbd92c76be5dbf,SystemUUID:ec20ce72-a358-a22f-56cb-d92c76be5dbf,BootID:1bf7fead-d48b-4969-8efe-375fd816430f,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:135240444,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:125046097,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.27.0-alpha.1],SizeBytes:105579814,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.27.0-alpha.1],SizeBytes:102633747,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:53521430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 
registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.27.0-alpha.1],SizeBytes:8787111,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:53:21.862: INFO: Logging kubelet events for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:53:22.015: INFO: Logging pods the kubelet thinks is on node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:53:22.174: INFO: kube-controller-manager-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:22.174: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 15 02:53:22.174: INFO: cilium-2scfb started at 2023-01-15 02:19:20 +0000 UTC (1+1 container statuses recorded) Jan 15 02:53:22.174: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:53:22.174: INFO: Container cilium-agent ready: true, restart count 1 Jan 15 02:53:22.174: INFO: ebs-csi-node-5h6bj started at 2023-01-15 02:19:21 +0000 UTC (0+3 container statuses recorded) Jan 15 02:53:22.174: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:53:22.174: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:53:22.174: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:53:22.174: INFO: kops-controller-jfrgz started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:22.174: INFO: Container kops-controller ready: true, restart count 0 Jan 15 02:53:22.174: INFO: cilium-operator-d47487c76-r5k8h started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:22.174: INFO: Container cilium-operator ready: true, restart count 1 Jan 15 02:53:22.174: INFO: etcd-manager-events-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:22.174: INFO: Container etcd-manager ready: true, restart count 0 Jan 15 02:53:22.174: INFO: etcd-manager-main-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:22.174: INFO: Container etcd-manager ready: true, restart count 0 Jan 15 02:53:22.174: INFO: kube-apiserver-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+2 container statuses recorded) Jan 15 02:53:22.174: INFO: Container healthcheck ready: true, restart count 0 Jan 15 02:53:22.174: INFO: Container kube-apiserver ready: true, restart count 1 Jan 15 02:53:22.174: INFO: kube-scheduler-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:22.174: INFO: Container kube-scheduler ready: true, restart count 0 Jan 15 02:53:22.174: INFO: dns-controller-7db8b74b99-p5n7k started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:22.174: INFO: Container dns-controller ready: true, restart count 0 Jan 15 02:53:22.174: INFO: ebs-csi-controller-6d664d6f54-7h2vq started at 2023-01-15 02:19:21 +0000 UTC (0+5 container statuses recorded) Jan 15 02:53:22.174: INFO: Container csi-attacher ready: true, restart count 0 Jan 15 02:53:22.174: INFO: Container csi-provisioner ready: true, restart count 
0 Jan 15 02:53:22.174: INFO: Container csi-resizer ready: true, restart count 0 Jan 15 02:53:22.174: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:53:22.174: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:53:22.656: INFO: Latency metrics for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:53:22.656: INFO: Logging node info for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:53:22.807: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-49-36.ap-northeast-1.compute.internal 26f759b3-6247-49a4-a840-624b157bb011 51408 0 2023-01-15 02:20:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-49-36.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-49-36.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9202":"ip-172-20-49-36.ap-northeast-1.compute.internal","ebs.csi.aws.com":"i-0e51c4a94e7d8d635"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-15 02:46:21 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-15 02:46:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0e51c4a94e7d8d635,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: 
{{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078206976 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973349376 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:48:36 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:48:36 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:48:36 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:48:36 +0000 UTC,LastTransitionTime:2023-01-15 02:20:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.49.36,},NodeAddress{Type:ExternalIP,Address:13.230.104.186,},NodeAddress{Type:Hostname,Address:ip-172-20-49-36.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-49-36.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-230-104-186.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec21c80a557af692a945a70b52b34335,SystemUUID:ec21c80a-557a-f692-a945-a70b52b34335,BootID:382fa58a-a2d1-4a1a-90a9-82a29ddf0315,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 
registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac 
k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-9202^ca6f8149-947e-11ed-a151-a69f1fbca41f kubernetes.io/csi/ebs.csi.aws.com^vol-01f46e370d57b6be1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-9202^ca6f8149-947e-11ed-a151-a69f1fbca41f,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-01f46e370d57b6be1,DevicePath:,},},Config:nil,},} Jan 15 02:53:22.807: INFO: Logging kubelet events for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:53:22.961: INFO: Logging pods the kubelet thinks is on node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:53:23.131: INFO: ebs-csi-node-k85ww started at 2023-01-15 02:20:29 +0000 UTC (0+3 container statuses recorded) Jan 15 02:53:23.131: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:53:23.131: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:53:23.131: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:53:23.131: INFO: liveness-3f2b5578-adeb-4c53-9fda-2c300c80f9f7 started at 2023-01-15 02:53:19 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.131: INFO: Container agnhost-container ready: false, restart count 0 Jan 15 02:53:23.131: INFO: simpletest-rc-to-be-deleted-d6sr6 started at 2023-01-15 02:52:59 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.131: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:23.131: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8nm4v started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.131: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:53:23.131: INFO: simpletest-rc-to-be-deleted-fvk57 started at 2023-01-15 02:52:55 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.131: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:23.131: INFO: simpletest-rc-to-be-deleted-8hzk6 started at 2023-01-15 02:52:55 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.131: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:23.131: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-fnkcp started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.131: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:23.131: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-2pcvv started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.131: INFO: Container 
cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:23.131: INFO: simpletest-rc-to-be-deleted-bgz8j started at 2023-01-15 02:52:55 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.131: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:23.131: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-lkqt4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.131: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:23.131: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-twdj8 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:23.132: INFO: inline-volume-tester-x567b started at 2023-01-15 02:46:20 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 15 02:53:23.132: INFO: csi-mockplugin-0 started at 2023-01-15 02:47:37 +0000 UTC (0+3 container statuses recorded) Jan 15 02:53:23.132: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:53:23.132: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:53:23.132: INFO: Container mock ready: false, restart count 0 Jan 15 02:53:23.132: INFO: csi-mockplugin-resizer-0 started at 2023-01-15 02:47:37 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container csi-resizer ready: true, restart count 0 Jan 15 02:53:23.132: INFO: simpletest-rc-to-be-deleted-c76hj started at 2023-01-15 02:52:55 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:23.132: INFO: simpletest-rc-to-be-deleted-989tz started at 2023-01-15 02:52:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:23.132: INFO: ss2-1 started at 2023-01-15 02:47:49 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container webserver ready: true, restart count 0 Jan 15 02:53:23.132: INFO: inline-volume-tester2-nw4hj started at 2023-01-15 02:48:27 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container csi-volume-tester ready: false, restart count 0 Jan 15 02:53:23.132: INFO: simpletest-rc-to-be-deleted-cvndb started at 2023-01-15 02:52:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:23.132: INFO: up-down-1-l8ptm started at 2023-01-15 02:48:43 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container up-down-1 ready: false, restart count 0 Jan 15 02:53:23.132: INFO: simpletest-rc-to-be-deleted-g9wr9 started at 2023-01-15 02:52:59 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:23.132: INFO: simpletest-rc-to-be-deleted-64czk started at 2023-01-15 02:52:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:23.132: INFO: csi-hostpathplugin-0 started at 2023-01-15 02:41:47 +0000 UTC (0+7 container statuses recorded) Jan 15 02:53:23.132: INFO: Container csi-attacher ready: true, restart count 0 Jan 15 02:53:23.132: INFO: Container csi-provisioner ready: true, restart count 0 Jan 15 02:53:23.132: INFO: 
Container csi-resizer ready: true, restart count 0 Jan 15 02:53:23.132: INFO: Container csi-snapshotter ready: true, restart count 0 Jan 15 02:53:23.132: INFO: Container hostpath ready: true, restart count 0 Jan 15 02:53:23.132: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:53:23.132: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:53:23.132: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-gnpk2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:23.132: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zfxfr started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:23.132: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-z5vgr started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:23.132: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-85xj4 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:23.132: INFO: simpletest-rc-to-be-deleted-7krqt started at 2023-01-15 02:52:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:23.132: INFO: cilium-fj5wg started at 2023-01-15 02:20:29 +0000 UTC (1+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:53:23.132: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:53:23.132: INFO: netserver-1 started at 2023-01-15 02:53:22 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container webserver ready: false, restart count 0 Jan 15 02:53:23.132: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ksvxw started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:23.132: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:24.144: INFO: Latency metrics for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:53:24.145: INFO: Logging node info for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:53:24.296: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-51-245.ap-northeast-1.compute.internal 29b48a1c-c57d-4410-932e-64e9296beb36 51411 0 2023-01-15 02:20:25 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-51-245.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-51-245.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0c39c90a04ae30573"} node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-15 02:39:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0c39c90a04ae30573,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:50:13 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:50:13 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:50:13 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:50:13 +0000 UTC,LastTransitionTime:2023-01-15 02:20:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.245,},NodeAddress{Type:ExternalIP,Address:18.183.227.37,},NodeAddress{Type:Hostname,Address:ip-172-20-51-245.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-51-245.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-183-227-37.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a7a3540b1bd030cf21ea473814791,SystemUUID:ec2a7a35-40b1-bd03-0cf2-1ea473814791,BootID:adde01e7-fded-49e3-90e6-8f78589ee04d,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:53:24.297: INFO: Logging kubelet events for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:53:24.450: INFO: Logging pods the kubelet thinks is on node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:53:24.612: INFO: simpletest-rc-to-be-deleted-gvkkg started at 2023-01-15 02:52:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:24.612: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-rwqgd started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:24.612: INFO: simpletest-rc-to-be-deleted-ghzmh started at 2023-01-15 02:52:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:24.612: INFO: ss2-2 started at 2023-01-15 02:48:20 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container webserver ready: 
false, restart count 0 Jan 15 02:53:24.612: INFO: pod-subpath-test-inlinevolume-tbfn started at 2023-01-15 02:51:39 +0000 UTC (2+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Init container init-volume-inlinevolume-tbfn ready: false, restart count 0 Jan 15 02:53:24.612: INFO: Init container test-init-volume-inlinevolume-tbfn ready: false, restart count 0 Jan 15 02:53:24.612: INFO: Container test-container-subpath-inlinevolume-tbfn ready: false, restart count 0 Jan 15 02:53:24.612: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ncnd2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:24.612: INFO: netserver-2 started at 2023-01-15 02:53:22 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container webserver ready: false, restart count 0 Jan 15 02:53:24.612: INFO: simpletest-rc-to-be-deleted-c868s started at 2023-01-15 02:52:55 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:24.612: INFO: coredns-autoscaler-557ccb4c66-2dfbk started at 2023-01-15 02:20:46 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container autoscaler ready: true, restart count 0 Jan 15 02:53:24.612: INFO: simpletest-rc-to-be-deleted-dwjrf started at 2023-01-15 02:52:59 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:24.612: INFO: simpletest-rc-to-be-deleted-h9bk6 started at 2023-01-15 02:52:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:24.612: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4ngm4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:24.612: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-swpkn started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:24.612: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-wwzg2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:24.612: INFO: csi-mockplugin-attacher-0 started at 2023-01-15 02:52:28 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:53:24.612: INFO: csi-mockplugin-resizer-0 started at 2023-01-15 02:52:28 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container csi-resizer ready: false, restart count 0 Jan 15 02:53:24.612: INFO: client-containers-0d74e631-4495-4380-894a-2ae98dd10b07 started at 2023-01-15 02:51:34 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container agnhost-container ready: false, restart count 0 Jan 15 02:53:24.612: INFO: simpletest-rc-to-be-deleted-hdbbk started at 2023-01-15 02:52:55 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:24.612: INFO: coredns-867df8f45c-586xt started at 2023-01-15 02:20:46 +0000 UTC (0+1 container 
statuses recorded) Jan 15 02:53:24.612: INFO: Container coredns ready: true, restart count 0 Jan 15 02:53:24.612: INFO: ebs-csi-node-4jd68 started at 2023-01-15 02:20:26 +0000 UTC (0+3 container statuses recorded) Jan 15 02:53:24.612: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:53:24.612: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:53:24.612: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:53:24.612: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-b5gxq started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:24.612: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-w8xjs started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:24.612: INFO: cilium-wmrqz started at 2023-01-15 02:20:26 +0000 UTC (1+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:53:24.612: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:53:24.612: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4rvct started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:24.612: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-vjzvp started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:24.612: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zzf5k started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:24.612: INFO: simpletest-rc-to-be-deleted-bdc7d started at 2023-01-15 02:52:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:24.612: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:24.612: INFO: csi-mockplugin-0 started at 2023-01-15 02:52:27 +0000 UTC (0+3 container statuses recorded) Jan 15 02:53:24.612: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:53:24.612: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:53:24.612: INFO: Container mock ready: false, restart count 0 Jan 15 02:53:25.457: INFO: Latency metrics for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:53:25.457: INFO: Logging node info for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:53:25.608: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-62-124.ap-northeast-1.compute.internal c1941ec4-d9e4-4772-a010-70d33526ed92 51423 0 2023-01-15 02:20:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-62-124.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a 
topology.hostpath.csi/node:ip-172-20-62-124.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0d77114c1af529cd3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-15 02:52:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0d77114c1af529cd3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:21 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:21 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:52:21 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:52:21 +0000 
UTC,LastTransitionTime:2023-01-15 02:20:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.62.124,},NodeAddress{Type:ExternalIP,Address:13.115.110.140,},NodeAddress{Type:Hostname,Address:ip-172-20-62-124.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-62-124.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-115-110-140.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2c70e0dbd67c31e6dd0538493d8c3c,SystemUUID:ec2c70e0-dbd6-7c31-e6dd-0538493d8c3c,BootID:981078cd-0f59-4a44-8062-deda147f2986,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:299475478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:56504541,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:53:25.609: INFO: Logging kubelet events for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:53:25.763: INFO: Logging pods the kubelet thinks is on node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:53:25.923: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-m72pw started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.923: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:25.925: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-74jlv started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:25.925: INFO: simpletest-rc-to-be-deleted-h98w6 started at 2023-01-15 02:52:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:25.925: INFO: coredns-867df8f45c-bk6lp started at 2023-01-15 02:21:13 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container coredns ready: true, restart count 0 Jan 15 02:53:25.925: INFO: netserver-3 started at 2023-01-15 02:53:22 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container webserver ready: false, restart count 0 Jan 15 02:53:25.925: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ftqzz started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:25.925: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-q75dx started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:25.925: INFO: up-down-1-hn8km started at 2023-01-15 02:48:43 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container up-down-1 ready: true, restart count 0 Jan 15 02:53:25.925: INFO: simpletest-rc-to-be-deleted-gw4w4 started at 2023-01-15 02:52:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:25.925: INFO: simpletest-rc-to-be-deleted-g7xkk started at 2023-01-15 02:52:59 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:25.925: INFO: ebs-csi-node-5qjb6 started at 2023-01-15 02:20:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:53:25.925: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:53:25.925: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:53:25.925: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:53:25.925: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-l4dnx started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:25.925: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-6bj8q started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:25.925: 
INFO: metadata-volume-a1d890c5-b9f7-4e8c-8891-9a05addd5332 started at 2023-01-15 02:52:38 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container client-container ready: false, restart count 0 Jan 15 02:53:25.925: INFO: simpletest-rc-to-be-deleted-d6hq4 started at 2023-01-15 02:52:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:25.925: INFO: simpletest-rc-to-be-deleted-d2f8m started at 2023-01-15 02:52:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:25.925: INFO: cilium-j8bh4 started at 2023-01-15 02:20:28 +0000 UTC (1+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:53:25.925: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:53:25.925: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8pdcw started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:25.925: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4kzd6 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:25.925: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-j92t4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:25.925: INFO: hostpath-symlink-prep-provisioning-5455 started at 2023-01-15 02:48:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container init-volume-provisioning-5455 ready: false, restart count 0 Jan 15 02:53:25.925: INFO: simpletest-rc-to-be-deleted-g4lhl started at 2023-01-15 02:52:55 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container nginx ready: false, restart count 0 Jan 15 02:53:25.925: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-hmpns started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:53:25.925: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:53:26.451: INFO: Latency metrics for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:53:26.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-5964" for this suite.
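Note: the node dumps above record each worker's conditions, capacity, images, and attached CSI volumes. For triaging similar failures outside the e2e framework, a minimal client-go sketch along the following lines can surface the same Ready-condition and attached-volume information; the program and the kubeconfig path are illustrative assumptions, not part of the test suite.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Print the Ready condition, mirroring the NodeCondition entries in the dump.
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
			}
		}
		// Print attached CSI volumes, mirroring the VolumesAttached entries in the dump.
		for _, v := range n.Status.VolumesAttached {
			fmt.Printf("  attached: %s\n", v.Name)
		}
	}
}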
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDeployment\stest\sDeployment\sReplicaSet\sorphaning\sand\sadoption\sregarding\scontrollerRef$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:136 Jan 15 02:45:25.007: Unexpected error: <*errors.errorString | 0xc00229f190>: { s: "error waiting for deployment \"test-orphan-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"test-orphan-deployment-5d9fdcc779\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}", } error waiting for deployment "test-orphan-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:1131from junit_09.xml
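Note: the failure above shows the test giving up while the Deployment's Available condition stayed False with reason MinimumReplicasUnavailable for the entire poll. The following is a minimal sketch, not the e2e framework's own waiter, of how such a condition poll looks with client-go; the 2s interval, the 5m timeout, the kubeconfig path, and the namespace/Deployment name (placeholders mirroring names seen in this report) are all assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s for up to an assumed 5m, roughly the cadence visible in the log above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		d, err := cs.AppsV1().Deployments("deployment-5964").Get(
			context.TODO(), "test-orphan-deployment", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Done once the Available condition turns True; otherwise log the status and keep polling.
		for _, c := range d.Status.Conditions {
			if c.Type == appsv1.DeploymentAvailable && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		fmt.Printf("deployment status: %+v\n", d.Status)
		return false, nil
	})
	if err != nil {
		panic(err)
	}
}

A real triage would typically also list the ReplicaSet's pods and their events to see why the single replica never became Ready.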
[BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 15 02:40:23.359: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] test Deployment ReplicaSet orphaning and adoption regarding controllerRef /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:136 Jan 15 02:40:24.405: INFO: Creating Deployment "test-orphan-deployment" Jan 15 02:40:24.705: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:40:26.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:40:28.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:40:30.856: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)}
[identical deployment status logged every ~2s from Jan 15 02:40:32.855 through Jan 15 02:43:08.855: the Deployment stayed at Replicas:1, ReadyReplicas:0, AvailableReplicas:0, with Available=False (MinimumReplicasUnavailable, "Deployment does not have minimum availability.") and Progressing=True (ReplicaSetUpdated, ReplicaSet "test-orphan-deployment-5d9fdcc779" is progressing) for the entire interval]
Jan 15 02:43:10.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1,
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:12.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:14.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:16.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:18.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:20.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:22.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:24.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:26.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:28.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:30.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:32.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:34.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:36.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:38.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:40.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:42.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:44.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:46.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:48.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:50.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:52.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:54.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:56.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:43:58.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:00.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:02.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:04.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:06.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:08.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:10.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:12.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:14.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:16.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:18.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:20.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:22.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:24.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:26.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:28.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:30.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:32.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:34.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:36.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:38.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:40.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:42.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:44.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:46.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:48.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:50.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:52.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:54.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:56.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:44:58.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:00.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:02.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:04.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:06.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:08.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:10.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:12.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:14.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:16.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:18.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:20.855: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:22.857: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:24.856: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:25.006: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 15 02:45:25.007: FAIL: Unexpected error: <*errors.errorString | 0xc00229f190>: { s: "error waiting for deployment \"test-orphan-deployment\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:\"ReplicaSetUpdated\", Message:\"ReplicaSet \\\"test-orphan-deployment-5d9fdcc779\\\" is progressing.\"}}, CollisionCount:(*int32)(nil)}", } error waiting for deployment "test-orphan-deployment" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 15, 2, 40, 24, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-orphan-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apps.testDeploymentsControllerRef(0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:1131 +0x465 k8s.io/kubernetes/test/e2e/apps.glob..func4.9() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:137 +0x1d k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0007f41a0, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 15 02:45:25.158: INFO: Deployment "test-orphan-deployment": &Deployment{ObjectMeta:{test-orphan-deployment deployment-7152 90b07d9b-5b80-45ba-9517-e84922813d67 45042 1 2023-01-15 02:40:24 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-01-15 02:40:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-15 02:40:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001f307c8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-15 02:40:24 +0000 UTC,LastTransitionTime:2023-01-15 02:40:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-orphan-deployment-5d9fdcc779" is progressing.,LastUpdateTime:2023-01-15 02:40:24 +0000 UTC,LastTransitionTime:2023-01-15 02:40:24 +0000 
UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 15 02:45:25.308: INFO: New ReplicaSet "test-orphan-deployment-5d9fdcc779" of Deployment "test-orphan-deployment": &ReplicaSet{ObjectMeta:{test-orphan-deployment-5d9fdcc779 deployment-7152 ccfb067f-7e54-4294-b848-573c57e2945f 45040 1 2023-01-15 02:40:24 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-orphan-deployment 90b07d9b-5b80-45ba-9517-e84922813d67 0xc001f30ba7 0xc001f30ba8}] [] [{kube-controller-manager Update apps/v1 2023-01-15 02:40:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"90b07d9b-5b80-45ba-9517-e84922813d67\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-15 02:40:24 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001f30c38 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 15 02:45:25.458: INFO: Pod "test-orphan-deployment-5d9fdcc779-bqvmn" is not available: &Pod{ObjectMeta:{test-orphan-deployment-5d9fdcc779-bqvmn test-orphan-deployment-5d9fdcc779- deployment-7152 2d97dda2-aa3b-4f11-bdd0-dca604a4489e 45038 0 2023-01-15 02:40:24 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-orphan-deployment-5d9fdcc779 ccfb067f-7e54-4294-b848-573c57e2945f 0xc002ae17e7 0xc002ae17e8}] [] [{kube-controller-manager Update v1 2023-01-15 02:40:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ccfb067f-7e54-4294-b848-573c57e2945f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-15 02:40:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gktxp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gktxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:ip-172-20-43-202.ap-northeast-1.compute.internal,HostNetwork:fals
e,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-15 02:40:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-15 02:40:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-15 02:40:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-15 02:40:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.20.43.202,PodIP:,StartTime:2023-01-15 02:40:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "deployment-7152". �[1mSTEP�[0m: Found 14 events. 
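(Context for the FAIL recorded at 02:45:25.007 above: "error waiting for deployment ... status to match expectation" means the test polled the Deployment's status roughly every two seconds, as the repeated "deployment status:" lines show, and gave up because the Deployment never reported an available replica. The Go sketch below is a minimal, illustrative version of such a wait using client-go; it is not the e2e framework's actual helper, and the function name, timeout, and kubeconfig handling are assumptions. The events collected below show why availability was never reached: every attempt to create the pod's sandbox failed because the local cilium agent had no more IPs to allocate.)

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDeploymentAvailable is an illustrative stand-in for the kind of wait
// that timed out above: poll the Deployment until the controller has observed
// the current generation and every desired replica is updated and available.
func waitForDeploymentAvailable(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	// The 2s cadence mirrors the polling interval visible in the log above.
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		want := int32(1) // the apiserver defaults spec.replicas to 1 when unset
		if d.Spec.Replicas != nil {
			want = *d.Spec.Replicas
		}
		// In the failure above this never became true: the status stayed at
		// UpdatedReplicas:1, AvailableReplicas:0, UnavailableReplicas:1.
		done := d.Status.ObservedGeneration >= d.Generation &&
			d.Status.UpdatedReplicas == want &&
			d.Status.AvailableReplicas == want
		return done, nil
	})
}

func main() {
	// Assumes a kubeconfig is pointed to by $KUBECONFIG.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Namespace and Deployment name taken from the log above; the 5m timeout is illustrative.
	if err := waitForDeploymentAvailable(client, "deployment-7152", "test-orphan-deployment", 5*time.Minute); err != nil {
		fmt.Println("deployment never became available:", err)
	}
}

(With a wait like this, the error surfaces only after the timeout; the faster way to see the root cause is the event stream that follows, where every FailedCreatePodSandBox message reports the cilium IPAM failure.)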
Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:24 +0000 UTC - event for test-orphan-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set test-orphan-deployment-5d9fdcc779 to 1 Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:24 +0000 UTC - event for test-orphan-deployment-5d9fdcc779: {replicaset-controller } SuccessfulCreate: Created pod: test-orphan-deployment-5d9fdcc779-bqvmn Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:24 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: {default-scheduler } Scheduled: Successfully assigned deployment-7152/test-orphan-deployment-5d9fdcc779-bqvmn to ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:26 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0bfc1a4dc43a71add7aee0fb4073e77ba5e3ef9da60758529d1e8966579a061b" network for pod "test-orphan-deployment-5d9fdcc779-bqvmn": networkPlugin cni failed to set up pod "test-orphan-deployment-5d9fdcc779-bqvmn_deployment-7152" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:27 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:29 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "71c590868f56eac354fa58ab24a022f28084f56e0839d111a86cafcb58be8e2c" network for pod "test-orphan-deployment-5d9fdcc779-bqvmn": networkPlugin cni failed to set up pod "test-orphan-deployment-5d9fdcc779-bqvmn_deployment-7152" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:31 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "31da77fb54403b40b2e2d467b45a892cb3e75a3ad20e0f9562a7af09dc93e6c0" network for pod "test-orphan-deployment-5d9fdcc779-bqvmn": networkPlugin cni failed to set up pod "test-orphan-deployment-5d9fdcc779-bqvmn_deployment-7152" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:37 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "67e1e64da394dfbb4ee2d5fac2930f85a35606bd2984c197960dac41d1232dab" network for pod "test-orphan-deployment-5d9fdcc779-bqvmn": networkPlugin cni failed to set up pod "test-orphan-deployment-5d9fdcc779-bqvmn_deployment-7152" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:40 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: 
{kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a7b97a198f3d24a7d23d94440cb3cc030476562ca6ebdcc501044d104a5d0fc7" network for pod "test-orphan-deployment-5d9fdcc779-bqvmn": networkPlugin cni failed to set up pod "test-orphan-deployment-5d9fdcc779-bqvmn_deployment-7152" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:43 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "06cbf6e0f401221c4c239e6237d198862314f0f1276ab36d39e8e657228343a6" network for pod "test-orphan-deployment-5d9fdcc779-bqvmn": networkPlugin cni failed to set up pod "test-orphan-deployment-5d9fdcc779-bqvmn_deployment-7152" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:46 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9d22186d333ce9f798bdfb423b94f1f55646a32cb0e1a14b36d12a13f15d4ddf" network for pod "test-orphan-deployment-5d9fdcc779-bqvmn": networkPlugin cni failed to set up pod "test-orphan-deployment-5d9fdcc779-bqvmn_deployment-7152" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:50 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c46523055fac3aa82e78ef727afd0502567fa67ba175d17fa3db18d64758c80f" network for pod "test-orphan-deployment-5d9fdcc779-bqvmn": networkPlugin cni failed to set up pod "test-orphan-deployment-5d9fdcc779-bqvmn_deployment-7152" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:25.608: INFO: At 2023-01-15 02:40:53 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "cbe2a9f76983144d3edddacc61ccd915192bb140c4245d6d316e319ead856c3e" network for pod "test-orphan-deployment-5d9fdcc779-bqvmn": networkPlugin cni failed to set up pod "test-orphan-deployment-5d9fdcc779-bqvmn_deployment-7152" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:25.608: INFO: At 2023-01-15 02:41:13 +0000 UTC - event for test-orphan-deployment-5d9fdcc779-bqvmn: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "53d9cbeb17e48f4ed9f06da7622e4f571ebefe5ccf9b03402724492245d347a4" network for pod "test-orphan-deployment-5d9fdcc779-bqvmn": networkPlugin cni failed to set up pod 
"test-orphan-deployment-5d9fdcc779-bqvmn_deployment-7152" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:25.758: INFO: POD NODE PHASE GRACE CONDITIONS Jan 15 02:45:25.758: INFO: test-orphan-deployment-5d9fdcc779-bqvmn ip-172-20-43-202.ap-northeast-1.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:40:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:40:24 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:40:24 +0000 UTC ContainersNotReady containers with unready status: [httpd]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:40:24 +0000 UTC }] Jan 15 02:45:25.758: INFO: Jan 15 02:45:25.909: INFO: Unable to fetch deployment-7152/test-orphan-deployment-5d9fdcc779-bqvmn/httpd logs: the server rejected our request for an unknown reason (get pods test-orphan-deployment-5d9fdcc779-bqvmn) Jan 15 02:45:26.060: INFO: Logging node info for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:45:26.209: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-43-202.ap-northeast-1.compute.internal bc0ee9cc-d624-47c0-9b67-04d40afa9f71 46408 0 2023-01-15 02:20:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-43-202.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-43-202.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7187":"csi-mock-csi-mock-volumes-7187","ebs.csi.aws.com":"i-011b6dcd6ec6449ff"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-15 02:40:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2023-01-15 02:42:39 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-011b6dcd6ec6449ff,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078206976 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973349376 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:58 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:58 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:58 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:41:58 +0000 UTC,LastTransitionTime:2023-01-15 02:20:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.43.202,},NodeAddress{Type:ExternalIP,Address:13.231.182.149,},NodeAddress{Type:Hostname,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-231-182-149.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec22781cfe518dfd12a3a957bb6e997a,SystemUUID:ec22781c-fe51-8dfd-12a3-a957bb6e997a,BootID:949cecd1-b820-47c2-afb4-e8715b8e9ccd,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04fd6a24339ad94ba,DevicePath:,},},Config:nil,},} Jan 15 02:45:26.210: INFO: Logging kubelet events for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:45:26.362: INFO: Logging pods the kubelet thinks is on node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:45:26.522: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-kspn8 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container 
cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:26.523: INFO: test-orphan-deployment-5d9fdcc779-bqvmn started at 2023-01-15 02:40:24 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container httpd ready: false, restart count 0 Jan 15 02:45:26.523: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zg7rx started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:26.523: INFO: netserver-0 started at 2023-01-15 02:39:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container webserver ready: true, restart count 0 Jan 15 02:45:26.523: INFO: pod-host-path-test started at 2023-01-15 02:45:18 +0000 UTC (0+2 container statuses recorded) Jan 15 02:45:26.523: INFO: Container test-container-1 ready: false, restart count 0 Jan 15 02:45:26.523: INFO: Container test-container-2 ready: false, restart count 0 Jan 15 02:45:26.523: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-gx8nq started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:26.523: INFO: netserver-0 started at 2023-01-15 02:42:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:26.523: INFO: csi-mockplugin-0 started at 2023-01-15 02:40:08 +0000 UTC (0+4 container statuses recorded) Jan 15 02:45:26.523: INFO: Container busybox ready: true, restart count 0 Jan 15 02:45:26.523: INFO: Container csi-provisioner ready: true, restart count 0 Jan 15 02:45:26.523: INFO: Container driver-registrar ready: true, restart count 0 Jan 15 02:45:26.523: INFO: Container mock ready: true, restart count 0 Jan 15 02:45:26.523: INFO: service-proxy-disabled-w9tn2 started at 2023-01-15 02:43:29 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container service-proxy-disabled ready: false, restart count 0 Jan 15 02:45:26.523: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-94mx6 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:26.523: INFO: cilium-nwjl7 started at 2023-01-15 02:20:28 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:45:26.523: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:45:26.523: INFO: pvc-volume-tester-94cwc started at 2023-01-15 02:41:52 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container volume-tester ready: false, restart count 0 Jan 15 02:45:26.523: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-5fshl started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:26.523: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-5qrcs started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:45:26.523: INFO: pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a started at 2023-01-15 02:42:36 +0000 UTC 
(0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:45:26.523: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-jpthm started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:26.523: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-clxvc started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:26.523: INFO: pod-subpath-test-inlinevolume-mv6s started at 2023-01-15 02:42:42 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Init container init-volume-inlinevolume-mv6s ready: false, restart count 0 Jan 15 02:45:26.523: INFO: Container test-container-subpath-inlinevolume-mv6s ready: false, restart count 0 Jan 15 02:45:26.523: INFO: ebs-csi-node-bwkxc started at 2023-01-15 02:20:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:26.523: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:26.523: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:45:26.523: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:45:26.523: INFO: netserver-0 started at 2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container webserver ready: true, restart count 0 Jan 15 02:45:26.523: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-rjqfv started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:26.523: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8sg5j started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:26.523: INFO: pod-secrets-a94137a6-dc2b-4086-a3eb-f2cff844958f started at 2023-01-15 02:45:23 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:26.523: INFO: Container secret-volume-test ready: false, restart count 0 Jan 15 02:45:27.068: INFO: Latency metrics for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:45:27.068: INFO: Logging node info for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:45:27.218: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-47-130.ap-northeast-1.compute.internal ea498620-e864-4902-9d30-d8389c23e560 46002 0 2023-01-15 02:18:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-47-130.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-07af92bd9e489a81a"} node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-15 02:18:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-15 02:19:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-15 02:19:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:19:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-15 02:21:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-07af92bd9e489a81a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3918811136 0} {<nil>} 3826964Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3813953536 0} {<nil>} 3724564Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:44 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:44 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:44 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:41:44 +0000 UTC,LastTransitionTime:2023-01-15 02:19:38 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.47.130,},NodeAddress{Type:ExternalIP,Address:52.193.20.80,},NodeAddress{Type:Hostname,Address:ip-172-20-47-130.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-47-130.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-193-20-80.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec20ce72a358a22f56cbd92c76be5dbf,SystemUUID:ec20ce72-a358-a22f-56cb-d92c76be5dbf,BootID:1bf7fead-d48b-4969-8efe-375fd816430f,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:135240444,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:125046097,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.27.0-alpha.1],SizeBytes:105579814,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.27.0-alpha.1],SizeBytes:102633747,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:53521430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 
registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.27.0-alpha.1],SizeBytes:8787111,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:45:27.218: INFO: Logging kubelet events for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:45:27.370: INFO: Logging pods the kubelet thinks is on node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:45:27.529: INFO: kube-controller-manager-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:27.529: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 15 02:45:27.529: INFO: cilium-2scfb started at 2023-01-15 02:19:20 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:27.529: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:45:27.529: INFO: Container cilium-agent ready: true, restart count 1 Jan 15 02:45:27.529: INFO: ebs-csi-node-5h6bj started at 2023-01-15 02:19:21 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:27.529: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:27.529: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:45:27.529: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:45:27.529: INFO: kops-controller-jfrgz started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:27.529: INFO: Container kops-controller ready: true, restart count 0 Jan 15 02:45:27.529: INFO: cilium-operator-d47487c76-r5k8h started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:27.529: INFO: Container cilium-operator ready: true, restart count 1 Jan 15 02:45:27.529: INFO: etcd-manager-events-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:27.529: INFO: Container etcd-manager ready: true, restart count 0 Jan 15 02:45:27.529: INFO: etcd-manager-main-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:27.529: INFO: Container etcd-manager ready: true, restart count 0 Jan 15 02:45:27.529: INFO: kube-apiserver-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+2 container statuses recorded) Jan 15 02:45:27.529: INFO: Container healthcheck ready: true, restart count 0 Jan 15 02:45:27.529: INFO: Container kube-apiserver ready: true, restart count 1 Jan 15 02:45:27.529: INFO: kube-scheduler-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:27.529: INFO: Container kube-scheduler ready: true, restart count 0 Jan 15 02:45:27.529: INFO: dns-controller-7db8b74b99-p5n7k started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:27.529: INFO: Container dns-controller ready: true, restart count 0 Jan 15 02:45:27.529: INFO: ebs-csi-controller-6d664d6f54-7h2vq started at 2023-01-15 02:19:21 +0000 UTC (0+5 container statuses recorded) Jan 15 02:45:27.529: INFO: Container csi-attacher ready: true, restart count 0 Jan 15 02:45:27.529: INFO: Container csi-provisioner ready: true, restart count 
0 Jan 15 02:45:27.529: INFO: Container csi-resizer ready: true, restart count 0 Jan 15 02:45:27.529: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:27.529: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:45:28.002: INFO: Latency metrics for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:45:28.002: INFO: Logging node info for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:45:28.152: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-49-36.ap-northeast-1.compute.internal 26f759b3-6247-49a4-a840-624b157bb011 47006 0 2023-01-15 02:20:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-49-36.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-49-36.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0e51c4a94e7d8d635"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-15 02:39:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0e51c4a94e7d8d635,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{4078206976 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973349376 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:01 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:01 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:01 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:45:01 +0000 UTC,LastTransitionTime:2023-01-15 02:20:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.49.36,},NodeAddress{Type:ExternalIP,Address:13.230.104.186,},NodeAddress{Type:Hostname,Address:ip-172-20-49-36.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-49-36.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-230-104-186.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec21c80a557af692a945a70b52b34335,SystemUUID:ec21c80a-557a-f692-a945-a70b52b34335,BootID:382fa58a-a2d1-4a1a-90a9-82a29ddf0315,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e 
registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:45:28.152: INFO: Logging kubelet events for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:45:28.304: INFO: Logging pods the kubelet thinks is on node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:45:28.484: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8nm4v started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.484: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:45:28.484: INFO: pod-1702457c-6c51-477b-8840-878626395835 started at 2023-01-15 02:40:51 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.484: INFO: Container test-container ready: false, restart count 0 Jan 15 02:45:28.484: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-fnkcp started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.484: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:28.484: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-2pcvv started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.484: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:28.484: INFO: hostexec-ip-172-20-49-36.ap-northeast-1.compute.internal-56fhf started at 2023-01-15 02:45:27 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.484: INFO: Container agnhost-container ready: false, restart count 0 Jan 15 02:45:28.485: INFO: netserver-1 started at 2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:28.485: INFO: pod-ed4dba04-9147-4141-a040-7a3a19a166d3 started at 2023-01-15 02:40:37 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:45:28.485: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-lkqt4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:28.485: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-twdj8 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:28.485: INFO: netserver-1 started at 2023-01-15 02:39:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container webserver ready: true, restart count 0 Jan 15 02:45:28.485: INFO: execpod2nfv6 started at 2023-01-15 02:40:16 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container agnhost-container ready: false, restart count 0 Jan 15 02:45:28.485: INFO: liveness-3943d642-76a2-4688-ab80-3fd44e1c546d started at 2023-01-15 02:40:13 +0000 UTC 
(0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container agnhost-container ready: true, restart count 1 Jan 15 02:45:28.485: INFO: pod-init-02bbd522-cfbe-4f6f-a59f-4342588923f2 started at 2023-01-15 02:41:11 +0000 UTC (2+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Init container init1 ready: false, restart count 0 Jan 15 02:45:28.485: INFO: Init container init2 ready: false, restart count 0 Jan 15 02:45:28.485: INFO: Container run1 ready: false, restart count 0 Jan 15 02:45:28.485: INFO: hostexec-ip-172-20-49-36.ap-northeast-1.compute.internal-fbxxx started at 2023-01-15 02:40:28 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:45:28.485: INFO: netserver-1 started at 2023-01-15 02:42:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:28.485: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-z5vgr started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:28.485: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-85xj4 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:28.485: INFO: csi-hostpathplugin-0 started at 2023-01-15 02:41:47 +0000 UTC (0+7 container statuses recorded) Jan 15 02:45:28.485: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:45:28.485: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:45:28.485: INFO: Container csi-resizer ready: false, restart count 0 Jan 15 02:45:28.485: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 15 02:45:28.485: INFO: Container hostpath ready: false, restart count 0 Jan 15 02:45:28.485: INFO: Container liveness-probe ready: false, restart count 0 Jan 15 02:45:28.485: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 15 02:45:28.485: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-gnpk2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:28.485: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zfxfr started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:28.485: INFO: cilium-fj5wg started at 2023-01-15 02:20:29 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:45:28.485: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:45:28.485: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ksvxw started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:28.485: INFO: ebs-csi-node-k85ww started at 2023-01-15 02:20:29 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:28.485: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:28.485: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:45:28.485: 
INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:45:28.485: INFO: busybox-privileged-true-0b68ae16-fa00-43df-aa4c-47deacec4b1f started at 2023-01-15 02:40:27 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:28.485: INFO: Container busybox-privileged-true-0b68ae16-fa00-43df-aa4c-47deacec4b1f ready: false, restart count 0 Jan 15 02:45:29.034: INFO: Latency metrics for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:45:29.034: INFO: Logging node info for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:45:29.184: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-51-245.ap-northeast-1.compute.internal 29b48a1c-c57d-4410-932e-64e9296beb36 47028 0 2023-01-15 02:20:25 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-51-245.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-51-245.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0c39c90a04ae30573"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-15 02:39:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0c39c90a04ae30573,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 
0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.245,},NodeAddress{Type:ExternalIP,Address:18.183.227.37,},NodeAddress{Type:Hostname,Address:ip-172-20-51-245.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-51-245.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-183-227-37.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a7a3540b1bd030cf21ea473814791,SystemUUID:ec2a7a35-40b1-bd03-0cf2-1ea473814791,BootID:adde01e7-fded-49e3-90e6-8f78589ee04d,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e 
registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:45:29.184: INFO: Logging kubelet events for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:45:29.337: INFO: Logging pods the kubelet thinks is on node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:45:29.498: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ncnd2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.498: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:29.498: INFO: netserver-2 started at 2023-01-15 02:39:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.498: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:29.498: INFO: coredns-autoscaler-557ccb4c66-2dfbk started at 2023-01-15 02:20:46 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.498: INFO: Container autoscaler ready: true, restart count 0 Jan 15 02:45:29.498: INFO: pod-1df18e3d-83da-4374-9fdb-7c83a860b73f started at 2023-01-15 02:43:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.498: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:45:29.498: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-swpkn started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.498: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:29.498: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-wwzg2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.498: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:29.498: INFO: pod-939baa6e-ab17-40de-8869-7bc7ca5c340e started at 2023-01-15 02:41:13 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.498: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:45:29.498: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4ngm4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.498: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:29.498: INFO: coredns-867df8f45c-586xt started at 2023-01-15 02:20:46 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Container coredns ready: true, restart count 0 Jan 15 02:45:29.499: INFO: hostexec-ip-172-20-51-245.ap-northeast-1.compute.internal-svs4t started at 2023-01-15 02:41:09 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:45:29.499: INFO: netserver-2 started at 2023-01-15 02:42:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:29.499: INFO: service-proxy-disabled-qkqj4 started at 2023-01-15 02:43:29 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Container service-proxy-disabled ready: false, restart count 0 Jan 15 02:45:29.499: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-b5gxq started at 2023-01-15 02:39:56 +0000 UTC (0+1 container 
statuses recorded) Jan 15 02:45:29.499: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:29.499: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-w8xjs started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:29.499: INFO: cilium-wmrqz started at 2023-01-15 02:20:26 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:45:29.499: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:45:29.499: INFO: ebs-csi-node-4jd68 started at 2023-01-15 02:20:26 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:29.499: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:29.499: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:45:29.499: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:45:29.499: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-vjzvp started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:45:29.499: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zzf5k started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:29.499: INFO: hostexec-ip-172-20-51-245.ap-northeast-1.compute.internal-hv7d9 started at 2023-01-15 02:43:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:45:29.499: INFO: netserver-2 started at 2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:29.499: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4rvct started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:29.499: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-rwqgd started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:29.499: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:30.149: INFO: Latency metrics for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:45:30.149: INFO: Logging node info for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:45:30.299: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-62-124.ap-northeast-1.compute.internal c1941ec4-d9e4-4772-a010-70d33526ed92 47127 0 2023-01-15 02:20:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-62-124.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-62-124.ap-northeast-1.compute.internal 
topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0d77114c1af529cd3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-15 02:40:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0d77114c1af529cd3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:22 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:22 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:22 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 
02:45:22 +0000 UTC,LastTransitionTime:2023-01-15 02:20:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.62.124,},NodeAddress{Type:ExternalIP,Address:13.115.110.140,},NodeAddress{Type:Hostname,Address:ip-172-20-62-124.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-62-124.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-115-110-140.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2c70e0dbd67c31e6dd0538493d8c3c,SystemUUID:ec2c70e0-dbd6-7c31-e6dd-0538493d8c3c,BootID:981078cd-0f59-4a44-8062-deda147f2986,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:299475478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:56504541,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:45:30.299: INFO: Logging kubelet events for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:45:30.452: INFO: Logging pods the kubelet thinks is on node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:45:30.615: INFO: netserver-3 started at 2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container webserver ready: true, restart count 0 Jan 15 02:45:30.615: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-q75dx started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:30.615: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ftqzz started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:30.615: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-l4dnx started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:30.615: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-6bj8q started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:45:30.615: INFO: pod-projected-configmaps-33eb312a-7b6e-41ff-961c-fcdd3860dfcc started at 2023-01-15 02:40:39 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container agnhost-container ready: false, restart count 0 Jan 15 02:45:30.615: INFO: netserver-3 started at 2023-01-15 02:42:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:30.615: INFO: cilium-j8bh4 started at 2023-01-15 02:20:28 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:45:30.615: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:45:30.615: INFO: ebs-csi-node-5qjb6 started at 2023-01-15 02:20:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:30.615: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:30.615: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:45:30.615: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:45:30.615: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8pdcw started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:30.615: INFO: netserver-3 started at 2023-01-15 02:39:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container webserver ready: true, restart count 0 Jan 15 02:45:30.615: INFO: csi-mockplugin-0 started at 2023-01-15 02:40:59 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:30.615: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:45:30.615: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:45:30.615: INFO: Container mock ready: false, restart count 0 Jan 15 02:45:30.615: INFO: 
pod-8ff4935b-10ec-4909-b5d3-90e2c946eb34 started at 2023-01-15 02:41:51 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:45:30.615: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4kzd6 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:30.615: INFO: csi-mockplugin-0 started at 2023-01-15 02:42:38 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:30.615: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:45:30.615: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:45:30.615: INFO: Container mock ready: false, restart count 0 Jan 15 02:45:30.615: INFO: csi-mockplugin-attacher-0 started at 2023-01-15 02:41:00 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:45:30.615: INFO: hostexec-ip-172-20-62-124.ap-northeast-1.compute.internal-p7z8r started at 2023-01-15 02:41:32 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:45:30.615: INFO: service-proxy-disabled-vlcxq started at 2023-01-15 02:43:29 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container service-proxy-disabled ready: false, restart count 0 Jan 15 02:45:30.615: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-hmpns started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:30.615: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-j92t4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:30.615: INFO: csi-mockplugin-attacher-0 started at 2023-01-15 02:42:38 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:45:30.615: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-74jlv started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:30.615: INFO: coredns-867df8f45c-bk6lp started at 2023-01-15 02:21:13 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container coredns ready: true, restart count 0 Jan 15 02:45:30.615: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-m72pw started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:30.615: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:31.138: INFO: Latency metrics for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:45:31.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-7152" for this suite.
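The node dumps above are the e2e framework's standard failure diagnostics: for each node it logs the full Node object (labels, taints, capacity/allocatable, conditions, cached images) and then the pods the kubelet reports for that node. As a rough, hypothetical equivalent outside the framework, the client-go sketch below pulls the same node conditions and kubelet-scheduled pods for a single node; the kubeconfig path and node name are placeholders taken from this dump for illustration, not part of the test code.

```go
// nodedump.go: a minimal sketch (not the e2e framework's own helper) that mirrors the
// "Logging node info" / "Logging pods the kubelet thinks is on node" output above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the job above uses /root/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Example node name copied from the dump above.
	nodeName := "ip-172-20-62-124.ap-northeast-1.compute.internal"
	node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Node conditions (MemoryPressure, DiskPressure, PIDPressure, Ready) as in the dump.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}

	// Pods scheduled to this node, analogous to the kubelet pod listing above.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```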
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cli\]\sKubectl\sclient\sUpdate\sDemo\sshould\sscale\sa\sreplication\scontroller\s\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 15 02:45:13.541: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:335
from junit_14.xml
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":223,"failed":0} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 15 02:39:45.036: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a replication controller Jan 15 02:39:46.097: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 create -f -' Jan 15 02:39:48.167: INFO: stderr: "" Jan 15 02:39:48.167: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. Jan 15 02:39:48.167: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:39:48.714: INFO: stderr: "" Jan 15 02:39:48.714: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-mtr8s " Jan 15 02:39:48.714: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:39:49.232: INFO: stderr: "" Jan 15 02:39:49.232: INFO: stdout: "" Jan 15 02:39:49.232: INFO: update-demo-nautilus-2rsjf is created but not running Jan 15 02:39:54.232: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:39:54.759: INFO: stderr: "" Jan 15 02:39:54.759: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-mtr8s " Jan 15 02:39:54.759: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:39:55.299: INFO: stderr: "" Jan 15 02:39:55.299: INFO: stdout: "true" Jan 15 02:39:55.299: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:39:55.831: INFO: stderr: "" Jan 15 02:39:55.831: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:39:55.831: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:39:55.983: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:39:55.984: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:39:55.984: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:39:55.984: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-mtr8s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:39:56.500: INFO: stderr: "" Jan 15 02:39:56.500: INFO: stdout: "true" Jan 15 02:39:56.500: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-mtr8s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:39:57.043: INFO: stderr: "" Jan 15 02:39:57.043: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:39:57.043: INFO: validating pod update-demo-nautilus-mtr8s Jan 15 02:39:57.195: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:39:57.195: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 15 02:39:57.195: INFO: update-demo-nautilus-mtr8s is verified up and running �[1mSTEP�[0m: scaling down the replication controller Jan 15 02:39:57.198: INFO: scanned /root for discovery docs: <nil> Jan 15 02:39:57.198: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Jan 15 02:39:59.179: INFO: stderr: "" Jan 15 02:39:59.179: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. Jan 15 02:39:59.179: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:39:59.735: INFO: stderr: "" Jan 15 02:39:59.735: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-mtr8s " �[1mSTEP�[0m: Replicas for name=update-demo: expected=1 actual=2 Jan 15 02:40:04.736: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:40:05.257: INFO: stderr: "" Jan 15 02:40:05.257: INFO: stdout: "update-demo-nautilus-2rsjf " Jan 15 02:40:05.257: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:05.776: INFO: stderr: "" Jan 15 02:40:05.776: INFO: stdout: "true" Jan 15 02:40:05.776: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:40:06.307: INFO: stderr: "" Jan 15 02:40:06.307: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:40:06.307: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:40:06.508: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:40:06.508: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 15 02:40:06.508: INFO: update-demo-nautilus-2rsjf is verified up and running �[1mSTEP�[0m: scaling up the replication controller Jan 15 02:40:06.510: INFO: scanned /root for discovery docs: <nil> Jan 15 02:40:06.510: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Jan 15 02:40:07.336: INFO: stderr: "" Jan 15 02:40:07.336: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. Jan 15 02:40:07.336: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:40:07.864: INFO: stderr: "" Jan 15 02:40:07.864: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:40:07.864: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:08.384: INFO: stderr: "" Jan 15 02:40:08.384: INFO: stdout: "true" Jan 15 02:40:08.384: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:40:08.912: INFO: stderr: "" Jan 15 02:40:08.912: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:40:08.912: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:40:09.071: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:40:09.072: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:40:09.072: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:40:09.072: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:09.592: INFO: stderr: "" Jan 15 02:40:09.592: INFO: stdout: "" Jan 15 02:40:09.592: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:40:14.593: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:40:15.140: INFO: stderr: "" Jan 15 02:40:15.140: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:40:15.140: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:15.666: INFO: stderr: "" Jan 15 02:40:15.667: INFO: stdout: "true" Jan 15 02:40:15.667: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:40:16.185: INFO: stderr: "" Jan 15 02:40:16.185: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:40:16.185: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:40:16.337: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:40:16.337: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:40:16.337: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:40:16.337: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:16.855: INFO: stderr: "" Jan 15 02:40:16.855: INFO: stdout: "" Jan 15 02:40:16.855: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:40:21.855: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:40:22.381: INFO: stderr: "" Jan 15 02:40:22.382: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:40:22.382: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:22.905: INFO: stderr: "" Jan 15 02:40:22.905: INFO: stdout: "true" Jan 15 02:40:22.905: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:40:23.442: INFO: stderr: "" Jan 15 02:40:23.442: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:40:23.442: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:40:23.594: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:40:23.594: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:40:23.594: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:40:23.594: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:24.112: INFO: stderr: "" Jan 15 02:40:24.112: INFO: stdout: "" Jan 15 02:40:24.112: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:40:29.113: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:40:29.631: INFO: stderr: "" Jan 15 02:40:29.631: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:40:29.631: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:30.154: INFO: stderr: "" Jan 15 02:40:30.154: INFO: stdout: "true" Jan 15 02:40:30.154: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:40:30.695: INFO: stderr: "" Jan 15 02:40:30.695: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:40:30.695: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:40:30.848: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:40:30.849: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:40:30.849: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:40:30.849: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:31.365: INFO: stderr: "" Jan 15 02:40:31.365: INFO: stdout: "" Jan 15 02:40:31.365: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:40:36.366: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:40:37.032: INFO: stderr: "" Jan 15 02:40:37.032: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:40:37.032: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:37.543: INFO: stderr: "" Jan 15 02:40:37.544: INFO: stdout: "true" Jan 15 02:40:37.544: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:40:38.059: INFO: stderr: "" Jan 15 02:40:38.059: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:40:38.059: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:40:38.211: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:40:38.211: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:40:38.211: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:40:38.211: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:38.729: INFO: stderr: "" Jan 15 02:40:38.729: INFO: stdout: "" Jan 15 02:40:38.729: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:40:43.733: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:40:44.415: INFO: stderr: "" Jan 15 02:40:44.415: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:40:44.415: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:44.933: INFO: stderr: "" Jan 15 02:40:44.933: INFO: stdout: "true" Jan 15 02:40:44.933: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:40:45.463: INFO: stderr: "" Jan 15 02:40:45.463: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:40:45.463: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:40:45.616: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:40:45.616: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:40:45.616: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:40:45.616: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:46.131: INFO: stderr: "" Jan 15 02:40:46.131: INFO: stdout: "" Jan 15 02:40:46.131: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:40:51.131: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:40:51.657: INFO: stderr: "" Jan 15 02:40:51.657: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:40:51.657: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:52.186: INFO: stderr: "" Jan 15 02:40:52.186: INFO: stdout: "true" Jan 15 02:40:52.186: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:40:52.701: INFO: stderr: "" Jan 15 02:40:52.701: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:40:52.701: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:40:52.853: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:40:52.853: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:40:52.853: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:40:52.853: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:53.368: INFO: stderr: "" Jan 15 02:40:53.368: INFO: stdout: "" Jan 15 02:40:53.368: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:40:58.368: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:40:58.890: INFO: stderr: "" Jan 15 02:40:58.890: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:40:58.890: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:40:59.406: INFO: stderr: "" Jan 15 02:40:59.406: INFO: stdout: "true" Jan 15 02:40:59.406: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:40:59.927: INFO: stderr: "" Jan 15 02:40:59.927: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:40:59.927: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:41:00.080: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:41:00.080: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:41:00.080: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:41:00.080: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:00.612: INFO: stderr: "" Jan 15 02:41:00.612: INFO: stdout: "" Jan 15 02:41:00.612: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:41:05.612: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:41:06.281: INFO: stderr: "" Jan 15 02:41:06.281: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:41:06.282: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:06.816: INFO: stderr: "" Jan 15 02:41:06.817: INFO: stdout: "true" Jan 15 02:41:06.817: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:41:07.336: INFO: stderr: "" Jan 15 02:41:07.336: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:41:07.336: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:41:07.488: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:41:07.488: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:41:07.488: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:41:07.488: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:08.022: INFO: stderr: "" Jan 15 02:41:08.022: INFO: stdout: "" Jan 15 02:41:08.022: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:41:13.025: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:41:13.545: INFO: stderr: "" Jan 15 02:41:13.545: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:41:13.545: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:14.062: INFO: stderr: "" Jan 15 02:41:14.062: INFO: stdout: "true" Jan 15 02:41:14.062: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:41:14.607: INFO: stderr: "" Jan 15 02:41:14.607: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:41:14.607: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:41:14.759: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:41:14.759: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:41:14.759: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:41:14.759: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:15.272: INFO: stderr: "" Jan 15 02:41:15.272: INFO: stdout: "" Jan 15 02:41:15.272: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:41:20.272: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:41:20.798: INFO: stderr: "" Jan 15 02:41:20.798: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:41:20.798: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:21.319: INFO: stderr: "" Jan 15 02:41:21.319: INFO: stdout: "true" Jan 15 02:41:21.319: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:41:21.843: INFO: stderr: "" Jan 15 02:41:21.843: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:41:21.843: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:41:21.995: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:41:21.995: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:41:21.995: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:41:21.995: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:22.509: INFO: stderr: "" Jan 15 02:41:22.509: INFO: stdout: "" Jan 15 02:41:22.509: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:41:27.511: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:41:28.041: INFO: stderr: "" Jan 15 02:41:28.041: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:41:28.041: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:28.589: INFO: stderr: "" Jan 15 02:41:28.589: INFO: stdout: "true" Jan 15 02:41:28.589: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:41:29.123: INFO: stderr: "" Jan 15 02:41:29.123: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:41:29.123: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:41:29.275: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:41:29.275: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:41:29.275: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:41:29.275: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:29.788: INFO: stderr: "" Jan 15 02:41:29.788: INFO: stdout: "" Jan 15 02:41:29.788: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:41:34.789: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:41:35.324: INFO: stderr: "" Jan 15 02:41:35.324: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:41:35.324: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:35.864: INFO: stderr: "" Jan 15 02:41:35.864: INFO: stdout: "true" Jan 15 02:41:35.864: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:41:36.380: INFO: stderr: "" Jan 15 02:41:36.380: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:41:36.380: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:41:36.532: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:41:36.533: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:41:36.533: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:41:36.533: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:37.069: INFO: stderr: "" Jan 15 02:41:37.069: INFO: stdout: "" Jan 15 02:41:37.069: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:41:42.069: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:41:42.604: INFO: stderr: "" Jan 15 02:41:42.604: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:41:42.604: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:43.116: INFO: stderr: "" Jan 15 02:41:43.116: INFO: stdout: "true" Jan 15 02:41:43.116: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:41:43.632: INFO: stderr: "" Jan 15 02:41:43.633: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:41:43.633: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:41:43.784: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:41:43.784: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:41:43.784: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:41:43.784: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:44.302: INFO: stderr: "" Jan 15 02:41:44.302: INFO: stdout: "" Jan 15 02:41:44.302: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:41:49.303: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:41:49.871: INFO: stderr: "" Jan 15 02:41:49.872: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:41:49.872: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:50.395: INFO: stderr: "" Jan 15 02:41:50.395: INFO: stdout: "true" Jan 15 02:41:50.395: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:41:50.911: INFO: stderr: "" Jan 15 02:41:50.911: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:41:50.911: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:41:51.063: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:41:51.063: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:41:51.063: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:41:51.063: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:51.578: INFO: stderr: "" Jan 15 02:41:51.578: INFO: stdout: "" Jan 15 02:41:51.578: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:41:56.581: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:41:57.102: INFO: stderr: "" Jan 15 02:41:57.102: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:41:57.103: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:57.620: INFO: stderr: "" Jan 15 02:41:57.620: INFO: stdout: "true" Jan 15 02:41:57.620: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:41:58.151: INFO: stderr: "" Jan 15 02:41:58.151: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:41:58.151: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:41:58.303: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:41:58.303: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:41:58.303: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:41:58.303: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:41:58.825: INFO: stderr: "" Jan 15 02:41:58.825: INFO: stdout: "" Jan 15 02:41:58.825: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:42:03.825: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:42:04.354: INFO: stderr: "" Jan 15 02:42:04.354: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:42:04.354: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:04.878: INFO: stderr: "" Jan 15 02:42:04.878: INFO: stdout: "true" Jan 15 02:42:04.878: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:42:05.397: INFO: stderr: "" Jan 15 02:42:05.397: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:42:05.397: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:42:05.549: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:42:05.549: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:42:05.549: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:42:05.549: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:06.081: INFO: stderr: "" Jan 15 02:42:06.081: INFO: stdout: "" Jan 15 02:42:06.081: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:42:11.082: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:42:11.614: INFO: stderr: "" Jan 15 02:42:11.614: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:42:11.614: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:12.132: INFO: stderr: "" Jan 15 02:42:12.132: INFO: stdout: "true" Jan 15 02:42:12.132: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:42:12.657: INFO: stderr: "" Jan 15 02:42:12.657: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:42:12.657: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:42:12.809: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:42:12.809: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:42:12.809: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:42:12.809: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:13.328: INFO: stderr: "" Jan 15 02:42:13.328: INFO: stdout: "" Jan 15 02:42:13.328: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:42:18.329: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:42:18.857: INFO: stderr: "" Jan 15 02:42:18.857: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:42:18.858: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:19.376: INFO: stderr: "" Jan 15 02:42:19.376: INFO: stdout: "true" Jan 15 02:42:19.376: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:42:19.919: INFO: stderr: "" Jan 15 02:42:19.919: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:42:19.919: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:42:20.071: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:42:20.071: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:42:20.071: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:42:20.071: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:20.589: INFO: stderr: "" Jan 15 02:42:20.589: INFO: stdout: "" Jan 15 02:42:20.589: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:42:25.591: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:42:26.126: INFO: stderr: "" Jan 15 02:42:26.126: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:42:26.126: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:26.663: INFO: stderr: "" Jan 15 02:42:26.663: INFO: stdout: "true" Jan 15 02:42:26.663: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:42:27.202: INFO: stderr: "" Jan 15 02:42:27.202: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:42:27.202: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:42:27.356: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:42:27.357: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:42:27.357: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:42:27.357: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:27.879: INFO: stderr: "" Jan 15 02:42:27.879: INFO: stdout: "" Jan 15 02:42:27.879: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:42:32.882: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:42:33.404: INFO: stderr: "" Jan 15 02:42:33.404: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:42:33.404: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:33.942: INFO: stderr: "" Jan 15 02:42:33.942: INFO: stdout: "true" Jan 15 02:42:33.942: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:42:34.461: INFO: stderr: "" Jan 15 02:42:34.461: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:42:34.461: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:42:34.613: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:42:34.613: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:42:34.613: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:42:34.613: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:35.139: INFO: stderr: "" Jan 15 02:42:35.139: INFO: stdout: "" Jan 15 02:42:35.139: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:42:40.140: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:42:40.684: INFO: stderr: "" Jan 15 02:42:40.684: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:42:40.685: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:41.217: INFO: stderr: "" Jan 15 02:42:41.217: INFO: stdout: "true" Jan 15 02:42:41.217: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:42:41.740: INFO: stderr: "" Jan 15 02:42:41.740: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:42:41.740: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:42:41.892: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:42:41.892: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:42:41.892: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:42:41.892: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:42.421: INFO: stderr: "" Jan 15 02:42:42.421: INFO: stdout: "" Jan 15 02:42:42.421: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:42:47.423: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:42:47.971: INFO: stderr: "" Jan 15 02:42:47.971: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:42:47.971: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:48.490: INFO: stderr: "" Jan 15 02:42:48.490: INFO: stdout: "true" Jan 15 02:42:48.490: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:42:49.009: INFO: stderr: "" Jan 15 02:42:49.009: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:42:49.009: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:42:49.161: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:42:49.161: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:42:49.161: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:42:49.161: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:49.681: INFO: stderr: "" Jan 15 02:42:49.681: INFO: stdout: "" Jan 15 02:42:49.681: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:42:54.682: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:42:55.230: INFO: stderr: "" Jan 15 02:42:55.230: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:42:55.230: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:55.749: INFO: stderr: "" Jan 15 02:42:55.749: INFO: stdout: "true" Jan 15 02:42:55.749: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:42:56.292: INFO: stderr: "" Jan 15 02:42:56.292: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:42:56.292: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:42:56.444: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:42:56.444: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:42:56.445: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:42:56.445: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:42:56.983: INFO: stderr: "" Jan 15 02:42:56.983: INFO: stdout: "" Jan 15 02:42:56.983: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:43:01.985: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:43:02.513: INFO: stderr: "" Jan 15 02:43:02.513: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:43:02.513: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:03.059: INFO: stderr: "" Jan 15 02:43:03.059: INFO: stdout: "true" Jan 15 02:43:03.059: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:43:03.580: INFO: stderr: "" Jan 15 02:43:03.580: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:43:03.580: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:43:03.732: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:43:03.733: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:43:03.733: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:43:03.733: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:04.251: INFO: stderr: "" Jan 15 02:43:04.251: INFO: stdout: "" Jan 15 02:43:04.251: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:43:09.251: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:43:09.774: INFO: stderr: "" Jan 15 02:43:09.774: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:43:09.774: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:10.326: INFO: stderr: "" Jan 15 02:43:10.326: INFO: stdout: "true" Jan 15 02:43:10.326: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:43:10.845: INFO: stderr: "" Jan 15 02:43:10.845: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:43:10.845: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:43:10.996: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:43:10.996: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:43:10.996: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:43:10.996: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:11.515: INFO: stderr: "" Jan 15 02:43:11.515: INFO: stdout: "" Jan 15 02:43:11.515: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:43:16.516: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:43:17.070: INFO: stderr: "" Jan 15 02:43:17.070: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:43:17.070: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:17.593: INFO: stderr: "" Jan 15 02:43:17.593: INFO: stdout: "true" Jan 15 02:43:17.593: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:43:18.156: INFO: stderr: "" Jan 15 02:43:18.156: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:43:18.156: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:43:18.308: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:43:18.308: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:43:18.308: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:43:18.308: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:18.835: INFO: stderr: "" Jan 15 02:43:18.835: INFO: stdout: "" Jan 15 02:43:18.835: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:43:23.836: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:43:24.359: INFO: stderr: "" Jan 15 02:43:24.359: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:43:24.359: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:24.906: INFO: stderr: "" Jan 15 02:43:24.906: INFO: stdout: "true" Jan 15 02:43:24.906: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:43:25.429: INFO: stderr: "" Jan 15 02:43:25.430: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:43:25.430: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:43:25.581: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:43:25.581: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:43:25.581: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:43:25.581: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:26.103: INFO: stderr: "" Jan 15 02:43:26.103: INFO: stdout: "" Jan 15 02:43:26.103: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:43:31.104: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:43:31.784: INFO: stderr: "" Jan 15 02:43:31.784: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:43:31.784: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:32.298: INFO: stderr: "" Jan 15 02:43:32.299: INFO: stdout: "true" Jan 15 02:43:32.299: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:43:32.818: INFO: stderr: "" Jan 15 02:43:32.819: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:43:32.819: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:43:32.970: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:43:32.970: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:43:32.970: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:43:32.970: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:33.493: INFO: stderr: "" Jan 15 02:43:33.493: INFO: stdout: "" Jan 15 02:43:33.493: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:43:38.494: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:43:39.022: INFO: stderr: "" Jan 15 02:43:39.022: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:43:39.022: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:39.548: INFO: stderr: "" Jan 15 02:43:39.548: INFO: stdout: "true" Jan 15 02:43:39.548: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:43:40.069: INFO: stderr: "" Jan 15 02:43:40.069: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:43:40.069: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:43:40.221: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:43:40.221: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:43:40.221: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:43:40.221: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:40.757: INFO: stderr: "" Jan 15 02:43:40.757: INFO: stdout: "" Jan 15 02:43:40.757: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:43:45.758: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:43:46.293: INFO: stderr: "" Jan 15 02:43:46.293: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:43:46.293: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:46.815: INFO: stderr: "" Jan 15 02:43:46.815: INFO: stdout: "true" Jan 15 02:43:46.815: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:43:47.369: INFO: stderr: "" Jan 15 02:43:47.369: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:43:47.369: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:43:47.521: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:43:47.521: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:43:47.521: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:43:47.521: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:48.118: INFO: stderr: "" Jan 15 02:43:48.118: INFO: stdout: "" Jan 15 02:43:48.118: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:43:53.121: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:43:53.673: INFO: stderr: "" Jan 15 02:43:53.673: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:43:53.673: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:54.219: INFO: stderr: "" Jan 15 02:43:54.219: INFO: stdout: "true" Jan 15 02:43:54.219: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:43:54.749: INFO: stderr: "" Jan 15 02:43:54.749: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:43:54.749: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:43:54.900: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:43:54.900: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:43:54.900: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:43:54.900: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:43:55.436: INFO: stderr: "" Jan 15 02:43:55.436: INFO: stdout: "" Jan 15 02:43:55.436: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:44:00.436: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:44:00.981: INFO: stderr: "" Jan 15 02:44:00.981: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:44:00.981: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:01.516: INFO: stderr: "" Jan 15 02:44:01.516: INFO: stdout: "true" Jan 15 02:44:01.516: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:44:02.049: INFO: stderr: "" Jan 15 02:44:02.049: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:44:02.049: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:44:02.201: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:44:02.201: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:44:02.201: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:44:02.201: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:02.717: INFO: stderr: "" Jan 15 02:44:02.717: INFO: stdout: "" Jan 15 02:44:02.717: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:44:07.717: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:44:08.252: INFO: stderr: "" Jan 15 02:44:08.252: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:44:08.252: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:08.773: INFO: stderr: "" Jan 15 02:44:08.773: INFO: stdout: "true" Jan 15 02:44:08.773: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:44:09.292: INFO: stderr: "" Jan 15 02:44:09.292: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:44:09.294: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:44:09.446: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:44:09.446: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:44:09.446: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:44:09.446: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:09.963: INFO: stderr: "" Jan 15 02:44:09.963: INFO: stdout: "" Jan 15 02:44:09.963: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:44:14.964: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:44:15.496: INFO: stderr: "" Jan 15 02:44:15.496: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:44:15.496: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:16.025: INFO: stderr: "" Jan 15 02:44:16.026: INFO: stdout: "true" Jan 15 02:44:16.026: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:44:16.551: INFO: stderr: "" Jan 15 02:44:16.551: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:44:16.551: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:44:16.703: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:44:16.703: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:44:16.703: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:44:16.703: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:17.224: INFO: stderr: "" Jan 15 02:44:17.224: INFO: stdout: "" Jan 15 02:44:17.224: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:44:22.227: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:44:22.774: INFO: stderr: "" Jan 15 02:44:22.774: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:44:22.774: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:23.309: INFO: stderr: "" Jan 15 02:44:23.309: INFO: stdout: "true" Jan 15 02:44:23.309: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:44:23.848: INFO: stderr: "" Jan 15 02:44:23.848: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:44:23.848: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:44:24.000: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:44:24.000: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:44:24.000: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:44:24.000: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:24.520: INFO: stderr: "" Jan 15 02:44:24.520: INFO: stdout: "" Jan 15 02:44:24.520: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:44:29.521: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:44:30.060: INFO: stderr: "" Jan 15 02:44:30.060: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:44:30.062: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:30.577: INFO: stderr: "" Jan 15 02:44:30.577: INFO: stdout: "true" Jan 15 02:44:30.577: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:44:31.097: INFO: stderr: "" Jan 15 02:44:31.098: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:44:31.098: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:44:31.249: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:44:31.249: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:44:31.249: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:44:31.249: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:31.772: INFO: stderr: "" Jan 15 02:44:31.772: INFO: stdout: "" Jan 15 02:44:31.772: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:44:36.774: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:44:37.470: INFO: stderr: "" Jan 15 02:44:37.470: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:44:37.470: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:38.024: INFO: stderr: "" Jan 15 02:44:38.024: INFO: stdout: "true" Jan 15 02:44:38.024: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:44:38.562: INFO: stderr: "" Jan 15 02:44:38.562: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:44:38.562: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:44:38.714: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:44:38.714: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:44:38.714: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:44:38.714: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:39.259: INFO: stderr: "" Jan 15 02:44:39.259: INFO: stdout: "" Jan 15 02:44:39.259: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:44:44.260: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:44:44.826: INFO: stderr: "" Jan 15 02:44:44.826: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:44:44.826: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:45.356: INFO: stderr: "" Jan 15 02:44:45.358: INFO: stdout: "true" Jan 15 02:44:45.358: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:44:45.881: INFO: stderr: "" Jan 15 02:44:45.881: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:44:45.881: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:44:46.033: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:44:46.033: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:44:46.033: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:44:46.033: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:46.551: INFO: stderr: "" Jan 15 02:44:46.551: INFO: stdout: "" Jan 15 02:44:46.551: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:44:51.553: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:44:52.078: INFO: stderr: "" Jan 15 02:44:52.078: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:44:52.078: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:52.627: INFO: stderr: "" Jan 15 02:44:52.627: INFO: stdout: "true" Jan 15 02:44:52.627: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:44:53.150: INFO: stderr: "" Jan 15 02:44:53.150: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:44:53.150: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:44:53.301: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:44:53.301: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:44:53.301: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:44:53.301: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:53.824: INFO: stderr: "" Jan 15 02:44:53.824: INFO: stdout: "" Jan 15 02:44:53.824: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:44:58.825: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:44:59.353: INFO: stderr: "" Jan 15 02:44:59.353: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:44:59.353: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:44:59.896: INFO: stderr: "" Jan 15 02:44:59.896: INFO: stdout: "true" Jan 15 02:44:59.896: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:45:00.443: INFO: stderr: "" Jan 15 02:45:00.443: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:45:00.443: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:45:00.595: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:45:00.595: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:45:00.595: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:45:00.595: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:45:01.125: INFO: stderr: "" Jan 15 02:45:01.125: INFO: stdout: "" Jan 15 02:45:01.125: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:45:06.127: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 15 02:45:06.804: INFO: stderr: "" Jan 15 02:45:06.804: INFO: stdout: "update-demo-nautilus-2rsjf update-demo-nautilus-ll9hv " Jan 15 02:45:06.804: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:45:07.328: INFO: stderr: "" Jan 15 02:45:07.328: INFO: stdout: "true" Jan 15 02:45:07.328: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-2rsjf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 15 02:45:07.868: INFO: stderr: "" Jan 15 02:45:07.868: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:45:07.868: INFO: validating pod update-demo-nautilus-2rsjf Jan 15 02:45:08.020: INFO: got data: { "image": "nautilus.jpg" } Jan 15 02:45:08.020: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 15 02:45:08.020: INFO: update-demo-nautilus-2rsjf is verified up and running Jan 15 02:45:08.020: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods update-demo-nautilus-ll9hv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 15 02:45:08.539: INFO: stderr: "" Jan 15 02:45:08.539: INFO: stdout: "" Jan 15 02:45:08.539: INFO: update-demo-nautilus-ll9hv is created but not running Jan 15 02:45:13.541: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:335 +0x505 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0008af040, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a �[1mSTEP�[0m: using delete to clean up resources Jan 15 02:45:13.542: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 delete --grace-period=0 --force -f -' Jan 15 02:45:14.213: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 15 02:45:14.213: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 15 02:45:14.213: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get rc,svc -l name=update-demo --no-headers' Jan 15 02:45:14.883: INFO: stderr: "No resources found in kubectl-7126 namespace.\n" Jan 15 02:45:14.883: INFO: stdout: "" Jan 15 02:45:14.884: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/5b69b327-947a-11ed-bd5d-72eebb4d772a/kubectl --server=https://api.e2e-e2e-kops-grid-cilium-eni-deb10-k23-docker.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=kubectl-7126 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 15 02:45:15.425: INFO: stderr: "" Jan 15 02:45:15.425: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "kubectl-7126". �[1mSTEP�[0m: Found 27 events. 
Jan 15 02:45:15.578: INFO: At 2023-01-15 02:39:48 +0000 UTC - event for update-demo-nautilus: {replication-controller } SuccessfulCreate: Created pod: update-demo-nautilus-2rsjf Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:48 +0000 UTC - event for update-demo-nautilus: {replication-controller } SuccessfulCreate: Created pod: update-demo-nautilus-mtr8s Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:48 +0000 UTC - event for update-demo-nautilus-2rsjf: {default-scheduler } Scheduled: Successfully assigned kubectl-7126/update-demo-nautilus-2rsjf to ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:48 +0000 UTC - event for update-demo-nautilus-mtr8s: {default-scheduler } Scheduled: Successfully assigned kubectl-7126/update-demo-nautilus-mtr8s to ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:49 +0000 UTC - event for update-demo-nautilus-2rsjf: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/nautilus:1.5" already present on machine Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:49 +0000 UTC - event for update-demo-nautilus-mtr8s: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:50 +0000 UTC - event for update-demo-nautilus-2rsjf: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} Started: Started container update-demo Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:50 +0000 UTC - event for update-demo-nautilus-2rsjf: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} Created: Created container update-demo Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:55 +0000 UTC - event for update-demo-nautilus-mtr8s: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/nautilus:1.5" in 5.398982333s Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:55 +0000 UTC - event for update-demo-nautilus-mtr8s: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} Started: Started container update-demo Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:55 +0000 UTC - event for update-demo-nautilus-mtr8s: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} Created: Created container update-demo Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:57 +0000 UTC - event for update-demo-nautilus: {replication-controller } SuccessfulDelete: Deleted pod: update-demo-nautilus-mtr8s Jan 15 02:45:15.579: INFO: At 2023-01-15 02:39:59 +0000 UTC - event for update-demo-nautilus-mtr8s: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} Killing: Stopping container update-demo Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:07 +0000 UTC - event for update-demo-nautilus: {replication-controller } SuccessfulCreate: Created pod: update-demo-nautilus-ll9hv Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:07 +0000 UTC - event for update-demo-nautilus-ll9hv: {default-scheduler } Scheduled: Successfully assigned kubectl-7126/update-demo-nautilus-ll9hv to ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:11 +0000 UTC - event for update-demo-nautilus-ll9hv: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c626f35c1a4d8e3922e0c9e125d355b0bb79b248428c30b3668517c5cc6e648e" network for pod 
"update-demo-nautilus-ll9hv": networkPlugin cni failed to set up pod "update-demo-nautilus-ll9hv_kubectl-7126" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:12 +0000 UTC - event for update-demo-nautilus-ll9hv: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:15 +0000 UTC - event for update-demo-nautilus-ll9hv: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "fdb732b5c9052e914dd176adeaf6760f62a93cc6d9f9721e15ea7bb6fdbb7b27" network for pod "update-demo-nautilus-ll9hv": networkPlugin cni failed to set up pod "update-demo-nautilus-ll9hv_kubectl-7126" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:20 +0000 UTC - event for update-demo-nautilus-ll9hv: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "4d795f0c2048223c4abcb53134f9cf62c80994d16f1326b6b6dab5b2147535b3" network for pod "update-demo-nautilus-ll9hv": networkPlugin cni failed to set up pod "update-demo-nautilus-ll9hv_kubectl-7126" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:26 +0000 UTC - event for update-demo-nautilus-ll9hv: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8ebf075bd68665a39d7dc12bb0ba8c6a76b2cb45de35369ae1b67520c69040e1" network for pod "update-demo-nautilus-ll9hv": networkPlugin cni failed to set up pod "update-demo-nautilus-ll9hv_kubectl-7126" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:29 +0000 UTC - event for update-demo-nautilus-ll9hv: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0d3ed3de3e80b5bcfe7f26581f790615adbe3eae4ef3d95b928384c4bb6b3133" network for pod "update-demo-nautilus-ll9hv": networkPlugin cni failed to set up pod "update-demo-nautilus-ll9hv_kubectl-7126" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:32 +0000 UTC - event for update-demo-nautilus-ll9hv: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "03f9079c6265a742d289a07dbe69714478653beeab302e15654ad3c5ec6b8631" network for pod "update-demo-nautilus-ll9hv": networkPlugin cni failed to set up pod "update-demo-nautilus-ll9hv_kubectl-7126" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:34 +0000 UTC - event for update-demo-nautilus-ll9hv: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} 
FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "abb373f0c119f4f22c8cbc5a4eafe965c20c2657c3706bff521cdb8a3093b4ef" network for pod "update-demo-nautilus-ll9hv": networkPlugin cni failed to set up pod "update-demo-nautilus-ll9hv_kubectl-7126" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:37 +0000 UTC - event for update-demo-nautilus-ll9hv: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "406fb2684430586985ede0ccd74220035e6e5103baa59781e3d1287ca9f0e001" network for pod "update-demo-nautilus-ll9hv": networkPlugin cni failed to set up pod "update-demo-nautilus-ll9hv_kubectl-7126" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:15.579: INFO: At 2023-01-15 02:40:42 +0000 UTC - event for update-demo-nautilus-ll9hv: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a9aeff4740ae7c2a564d211026912c20b72612ad43f8bbe246e0b046ec6cc63f" network for pod "update-demo-nautilus-ll9hv": networkPlugin cni failed to set up pod "update-demo-nautilus-ll9hv_kubectl-7126" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:15.579: INFO: At 2023-01-15 02:41:03 +0000 UTC - event for update-demo-nautilus-ll9hv: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9e9e0bf8bfde0d6f2b3b6bf93bd8049b3a39488d9c4e13bd0301a59928414ff2" network for pod "update-demo-nautilus-ll9hv": networkPlugin cni failed to set up pod "update-demo-nautilus-ll9hv_kubectl-7126" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:45:15.579: INFO: At 2023-01-15 02:45:14 +0000 UTC - event for update-demo-nautilus-2rsjf: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} Killing: Stopping container update-demo Jan 15 02:45:15.730: INFO: POD NODE PHASE GRACE CONDITIONS Jan 15 02:45:15.730: INFO: update-demo-nautilus-ll9hv ip-172-20-43-202.ap-northeast-1.compute.internal Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:40:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:40:07 +0000 UTC ContainersNotReady containers with unready status: [update-demo]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:40:07 +0000 UTC ContainersNotReady containers with unready status: [update-demo]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:40:07 +0000 UTC }] Jan 15 02:45:15.730: INFO: Jan 15 02:45:15.883: INFO: Unable to fetch kubectl-7126/update-demo-nautilus-ll9hv/update-demo logs: the server rejected our request for an unknown reason (get pods update-demo-nautilus-ll9hv) Jan 15 02:45:16.036: INFO: Logging node info for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:45:16.187: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-43-202.ap-northeast-1.compute.internal bc0ee9cc-d624-47c0-9b67-04d40afa9f71 46408 0 2023-01-15 02:20:27 +0000 UTC <nil> 
<nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-43-202.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-43-202.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7187":"csi-mock-csi-mock-volumes-7187","ebs.csi.aws.com":"i-011b6dcd6ec6449ff"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-15 02:40:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2023-01-15 02:42:39 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-011b6dcd6ec6449ff,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078206976 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973349376 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:58 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:58 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:58 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:41:58 +0000 UTC,LastTransitionTime:2023-01-15 02:20:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.43.202,},NodeAddress{Type:ExternalIP,Address:13.231.182.149,},NodeAddress{Type:Hostname,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-231-182-149.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec22781cfe518dfd12a3a957bb6e997a,SystemUUID:ec22781c-fe51-8dfd-12a3-a957bb6e997a,BootID:949cecd1-b820-47c2-afb4-e8715b8e9ccd,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 
k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-04fd6a24339ad94ba,DevicePath:,},},Config:nil,},} Jan 15 02:45:16.190: INFO: Logging kubelet events for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:45:16.349: INFO: Logging pods the kubelet thinks is on node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:45:16.659: INFO: netserver-0 started at 2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container webserver ready: true, restart count 0 Jan 15 02:45:16.659: INFO: ebs-csi-node-bwkxc started at 2023-01-15 02:20:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:16.659: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:16.659: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:45:16.659: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:45:16.659: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8sg5j started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:16.659: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-rjqfv started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:16.659: INFO: test-orphan-deployment-5d9fdcc779-bqvmn started at 2023-01-15 02:40:24 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container httpd ready: false, restart count 0 Jan 15 02:45:16.659: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zg7rx started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:16.659: INFO: netserver-0 started at 2023-01-15 02:39:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container webserver ready: true, restart count 0 Jan 15 02:45:16.659: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-kspn8 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:16.659: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-gx8nq started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:16.659: INFO: netserver-0 started at 2023-01-15 02:42:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:16.659: INFO: csi-mockplugin-0 started at 2023-01-15 02:40:08 +0000 UTC (0+4 container statuses recorded) Jan 15 02:45:16.659: INFO: Container busybox ready: true, restart count 0 Jan 15 02:45:16.659: INFO: Container csi-provisioner ready: true, restart count 0 Jan 15 02:45:16.659: INFO: Container driver-registrar ready: true, 
restart count 0 Jan 15 02:45:16.659: INFO: Container mock ready: true, restart count 0 Jan 15 02:45:16.659: INFO: service-proxy-disabled-w9tn2 started at 2023-01-15 02:43:29 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container service-proxy-disabled ready: false, restart count 0 Jan 15 02:45:16.659: INFO: pvc-volume-tester-94cwc started at 2023-01-15 02:41:52 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container volume-tester ready: false, restart count 0 Jan 15 02:45:16.659: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-5fshl started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:16.659: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-5qrcs started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:45:16.659: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-94mx6 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:16.659: INFO: cilium-nwjl7 started at 2023-01-15 02:20:28 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:45:16.659: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:45:16.659: INFO: update-demo-nautilus-ll9hv started at 2023-01-15 02:40:07 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container update-demo ready: false, restart count 0 Jan 15 02:45:16.659: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-jpthm started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:16.659: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-clxvc started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:16.659: INFO: pod-aeb2b274-7ab8-476e-89d5-1b68c36dd11a started at 2023-01-15 02:42:36 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:45:16.659: INFO: pod-subpath-test-inlinevolume-mv6s started at 2023-01-15 02:42:42 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:16.659: INFO: Init container init-volume-inlinevolume-mv6s ready: false, restart count 0 Jan 15 02:45:16.659: INFO: Container test-container-subpath-inlinevolume-mv6s ready: false, restart count 0 Jan 15 02:45:17.197: INFO: Latency metrics for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:45:17.198: INFO: Logging node info for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:45:17.349: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-47-130.ap-northeast-1.compute.internal ea498620-e864-4902-9d30-d8389c23e560 46002 0 2023-01-15 02:18:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a 
kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-47-130.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-07af92bd9e489a81a"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-15 02:18:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2023-01-15 02:19:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2023-01-15 02:19:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:19:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-15 02:21:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-07af92bd9e489a81a,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3918811136 0} {<nil>} 3826964Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3813953536 0} {<nil>} 3724564Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:44 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has 
sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:44 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:41:44 +0000 UTC,LastTransitionTime:2023-01-15 02:18:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:41:44 +0000 UTC,LastTransitionTime:2023-01-15 02:19:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.47.130,},NodeAddress{Type:ExternalIP,Address:52.193.20.80,},NodeAddress{Type:Hostname,Address:ip-172-20-47-130.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-47-130.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-52-193-20-80.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec20ce72a358a22f56cbd92c76be5dbf,SystemUUID:ec20ce72-a358-a22f-56cb-d92c76be5dbf,BootID:1bf7fead-d48b-4969-8efe-375fd816430f,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:66a453db625abb268f4b3bbefc5a34a171d81e6e8796cecca54cfd71775c77c4 registry.k8s.io/etcdadm/etcd-manager:v3.0.20221209],SizeBytes:662209237,},ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.15 registry.k8s.io/kube-apiserver-amd64:v1.23.15],SizeBytes:135240444,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.15 registry.k8s.io/kube-controller-manager-amd64:v1.23.15],SizeBytes:125046097,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[quay.io/cilium/operator@sha256:a6d24a006a6b92967ac90786b49bc1ac26e5477cf028cd1186efcfc2466484db quay.io/cilium/operator:v1.12.5],SizeBytes:110016956,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.27.0-alpha.1],SizeBytes:105579814,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.27.0-alpha.1],SizeBytes:102633747,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:57736070,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:55464778,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b 
registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:54848595,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.15 registry.k8s.io/kube-scheduler-amd64:v1.23.15],SizeBytes:53521430,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.27.0-alpha.1],SizeBytes:8787111,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:45:17.349: INFO: Logging kubelet events for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:45:17.503: INFO: Logging pods the kubelet thinks is on node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:45:17.671: INFO: kube-controller-manager-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:17.672: INFO: Container kube-controller-manager ready: true, restart count 2 Jan 15 02:45:17.672: INFO: cilium-2scfb started at 2023-01-15 02:19:20 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:17.672: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:45:17.672: INFO: Container cilium-agent ready: true, restart count 1 Jan 15 02:45:17.672: INFO: ebs-csi-node-5h6bj started at 2023-01-15 02:19:21 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:17.672: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:17.672: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:45:17.672: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:45:17.672: INFO: kops-controller-jfrgz started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:17.672: INFO: Container kops-controller ready: true, restart count 0 Jan 15 02:45:17.672: INFO: cilium-operator-d47487c76-r5k8h started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:17.672: INFO: Container cilium-operator ready: true, restart count 1 Jan 15 02:45:17.672: INFO: ebs-csi-controller-6d664d6f54-7h2vq started at 2023-01-15 02:19:21 +0000 UTC (0+5 container statuses recorded) Jan 15 02:45:17.672: INFO: Container csi-attacher ready: true, restart count 0 Jan 15 02:45:17.672: INFO: Container csi-provisioner ready: true, restart count 0 Jan 15 02:45:17.672: INFO: Container csi-resizer ready: true, restart count 0 Jan 15 02:45:17.672: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:17.672: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:45:17.672: INFO: etcd-manager-events-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:17.672: INFO: Container etcd-manager ready: true, restart count 0 Jan 15 02:45:17.672: INFO: etcd-manager-main-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:17.672: INFO: Container etcd-manager ready: 
true, restart count 0 Jan 15 02:45:17.672: INFO: kube-apiserver-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+2 container statuses recorded) Jan 15 02:45:17.672: INFO: Container healthcheck ready: true, restart count 0 Jan 15 02:45:17.672: INFO: Container kube-apiserver ready: true, restart count 1 Jan 15 02:45:17.672: INFO: kube-scheduler-ip-172-20-47-130.ap-northeast-1.compute.internal started at 2023-01-15 02:18:11 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:17.672: INFO: Container kube-scheduler ready: true, restart count 0 Jan 15 02:45:17.672: INFO: dns-controller-7db8b74b99-p5n7k started at 2023-01-15 02:19:21 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:17.672: INFO: Container dns-controller ready: true, restart count 0 Jan 15 02:45:18.147: INFO: Latency metrics for node ip-172-20-47-130.ap-northeast-1.compute.internal Jan 15 02:45:18.147: INFO: Logging node info for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:45:18.299: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-49-36.ap-northeast-1.compute.internal 26f759b3-6247-49a4-a840-624b157bb011 47006 0 2023-01-15 02:20:28 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-49-36.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-49-36.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0e51c4a94e7d8d635"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kubelet Update v1 2023-01-15 02:39:54 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status} {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} }]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0e51c4a94e7d8d635,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078206976 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973349376 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:01 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:01 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:01 +0000 UTC,LastTransitionTime:2023-01-15 02:20:28 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:45:01 +0000 UTC,LastTransitionTime:2023-01-15 02:20:49 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.49.36,},NodeAddress{Type:ExternalIP,Address:13.230.104.186,},NodeAddress{Type:Hostname,Address:ip-172-20-49-36.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-49-36.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-230-104-186.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec21c80a557af692a945a70b52b34335,SystemUUID:ec21c80a-557a-f692-a945-a70b52b34335,BootID:382fa58a-a2d1-4a1a-90a9-82a29ddf0315,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:45:18.299: INFO: Logging kubelet events for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:45:18.453: INFO: Logging pods the kubelet thinks is on node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:45:18.622: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-lkqt4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:18.622: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-twdj8 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container 
cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:18.622: INFO: netserver-1 started at 2023-01-15 02:39:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container webserver ready: true, restart count 0 Jan 15 02:45:18.622: INFO: netserver-1 started at 2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:18.622: INFO: pod-ed4dba04-9147-4141-a040-7a3a19a166d3 started at 2023-01-15 02:40:37 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:45:18.622: INFO: execpod2nfv6 started at 2023-01-15 02:40:16 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container agnhost-container ready: false, restart count 0 Jan 15 02:45:18.622: INFO: liveness-3943d642-76a2-4688-ab80-3fd44e1c546d started at 2023-01-15 02:40:13 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container agnhost-container ready: true, restart count 1 Jan 15 02:45:18.622: INFO: pod-init-02bbd522-cfbe-4f6f-a59f-4342588923f2 started at 2023-01-15 02:41:11 +0000 UTC (2+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Init container init1 ready: false, restart count 0 Jan 15 02:45:18.622: INFO: Init container init2 ready: false, restart count 0 Jan 15 02:45:18.622: INFO: Container run1 ready: false, restart count 0 Jan 15 02:45:18.622: INFO: hostexec-ip-172-20-49-36.ap-northeast-1.compute.internal-fbxxx started at 2023-01-15 02:40:28 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:45:18.622: INFO: netserver-1 started at 2023-01-15 02:42:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:18.622: INFO: csi-hostpathplugin-0 started at 2023-01-15 02:41:47 +0000 UTC (0+7 container statuses recorded) Jan 15 02:45:18.622: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:45:18.622: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:45:18.622: INFO: Container csi-resizer ready: false, restart count 0 Jan 15 02:45:18.622: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 15 02:45:18.622: INFO: Container hostpath ready: false, restart count 0 Jan 15 02:45:18.622: INFO: Container liveness-probe ready: false, restart count 0 Jan 15 02:45:18.622: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 15 02:45:18.622: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-gnpk2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:18.622: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zfxfr started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:18.622: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-z5vgr started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:18.622: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-85xj4 started at 2023-01-15 02:39:57 +0000 UTC (0+1 
container statuses recorded) Jan 15 02:45:18.622: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:18.622: INFO: cilium-fj5wg started at 2023-01-15 02:20:29 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:45:18.622: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:45:18.622: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ksvxw started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:18.622: INFO: ebs-csi-node-k85ww started at 2023-01-15 02:20:29 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:18.622: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:18.622: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:45:18.622: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:45:18.622: INFO: busybox-privileged-true-0b68ae16-fa00-43df-aa4c-47deacec4b1f started at 2023-01-15 02:40:27 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container busybox-privileged-true-0b68ae16-fa00-43df-aa4c-47deacec4b1f ready: false, restart count 0 Jan 15 02:45:18.622: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8nm4v started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:45:18.622: INFO: pod-1702457c-6c51-477b-8840-878626395835 started at 2023-01-15 02:40:51 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container test-container ready: false, restart count 0 Jan 15 02:45:18.622: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-fnkcp started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:18.622: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-2pcvv started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:18.622: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:19.765: INFO: Latency metrics for node ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:45:19.765: INFO: Logging node info for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:45:19.916: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-51-245.ap-northeast-1.compute.internal 29b48a1c-c57d-4410-932e-64e9296beb36 47028 0 2023-01-15 02:20:25 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-51-245.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-51-245.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0c39c90a04ae30573"} node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-15 02:39:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0c39c90a04ae30573,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:25 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:45:06 +0000 UTC,LastTransitionTime:2023-01-15 02:20:46 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.51.245,},NodeAddress{Type:ExternalIP,Address:18.183.227.37,},NodeAddress{Type:Hostname,Address:ip-172-20-51-245.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-51-245.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-18-183-227-37.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2a7a3540b1bd030cf21ea473814791,SystemUUID:ec2a7a35-40b1-bd03-0cf2-1ea473814791,BootID:adde01e7-fded-49e3-90e6-8f78589ee04d,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:68d396900aeaa072c1f27289485fdac29834045a6f3ffe369bf389d830ef572d registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.6],SizeBytes:47910609,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:45:19.917: INFO: Logging kubelet events for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:45:20.070: INFO: Logging pods the kubelet thinks is on node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:45:20.233: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-w8xjs started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.233: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:20.234: INFO: cilium-wmrqz started at 2023-01-15 02:20:26 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:45:20.234: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:45:20.234: INFO: ebs-csi-node-4jd68 started at 2023-01-15 02:20:26 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:20.234: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:20.234: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 
02:45:20.234: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:45:20.234: INFO: netserver-2 started at 2023-01-15 02:42:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:20.234: INFO: service-proxy-disabled-qkqj4 started at 2023-01-15 02:43:29 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container service-proxy-disabled ready: false, restart count 0 Jan 15 02:45:20.234: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-b5gxq started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:20.234: INFO: netserver-2 started at 2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:20.234: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4rvct started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:20.234: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-vjzvp started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:45:20.234: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zzf5k started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:20.234: INFO: hostexec-ip-172-20-51-245.ap-northeast-1.compute.internal-hv7d9 started at 2023-01-15 02:43:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:45:20.234: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-rwqgd started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:20.234: INFO: coredns-autoscaler-557ccb4c66-2dfbk started at 2023-01-15 02:20:46 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container autoscaler ready: true, restart count 0 Jan 15 02:45:20.234: INFO: pod-1df18e3d-83da-4374-9fdb-7c83a860b73f started at 2023-01-15 02:43:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:45:20.234: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ncnd2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:20.234: INFO: netserver-2 started at 2023-01-15 02:39:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:20.234: INFO: pod-939baa6e-ab17-40de-8869-7bc7ca5c340e started at 2023-01-15 02:41:13 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:45:20.234: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4ngm4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 
02:45:20.234: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:20.234: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-swpkn started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:20.234: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-wwzg2 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:20.234: INFO: coredns-867df8f45c-586xt started at 2023-01-15 02:20:46 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container coredns ready: true, restart count 0 Jan 15 02:45:20.234: INFO: hostexec-ip-172-20-51-245.ap-northeast-1.compute.internal-svs4t started at 2023-01-15 02:41:09 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:20.234: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:45:20.886: INFO: Latency metrics for node ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:45:20.886: INFO: Logging node info for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:45:21.038: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-62-124.ap-northeast-1.compute.internal c1941ec4-d9e4-4772-a010-70d33526ed92 45120 0 2023-01-15 02:20:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-62-124.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-62-124.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0d77114c1af529cd3"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kubelet Update v1 2023-01-15 02:40:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-0d77114c1af529cd3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078190592 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973332992 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:40:14 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:40:14 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:40:14 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:40:14 +0000 UTC,LastTransitionTime:2023-01-15 02:20:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.62.124,},NodeAddress{Type:ExternalIP,Address:13.115.110.140,},NodeAddress{Type:Hostname,Address:ip-172-20-62-124.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-62-124.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-115-110-140.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2c70e0dbd67c31e6dd0538493d8c3c,SystemUUID:ec2c70e0-dbd6-7c31-e6dd-0538493d8c3c,BootID:981078cd-0f59-4a44-8062-deda147f2986,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c k8s.gcr.io/etcd:3.5.6-0],SizeBytes:299475478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5],SizeBytes:56504541,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:50249337,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 15 02:45:21.038: INFO: Logging kubelet events for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:45:21.192: INFO: Logging pods the kubelet thinks is on node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:45:21.384: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-4kzd6 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:21.384: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8pdcw started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:21.384: INFO: netserver-3 started at 2023-01-15 02:39:58 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container webserver ready: true, restart count 0 Jan 15 02:45:21.384: INFO: csi-mockplugin-0 started at 2023-01-15 02:40:59 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:21.384: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:45:21.384: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:45:21.384: INFO: Container mock ready: false, restart count 0 Jan 15 02:45:21.384: INFO: pod-8ff4935b-10ec-4909-b5d3-90e2c946eb34 started at 2023-01-15 02:41:51 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:45:21.384: INFO: csi-mockplugin-0 started at 2023-01-15 02:42:38 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:21.384: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:45:21.384: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:45:21.384: INFO: Container mock ready: false, restart count 0 Jan 15 02:45:21.384: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-hmpns started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:21.384: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-j92t4 started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:21.384: INFO: csi-mockplugin-attacher-0 started at 2023-01-15 02:41:00 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:45:21.384: INFO: hostexec-ip-172-20-62-124.ap-northeast-1.compute.internal-p7z8r started at 2023-01-15 02:41:32 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container agnhost-container ready: true, restart count 0 Jan 15 02:45:21.384: INFO: service-proxy-disabled-vlcxq started at 2023-01-15 02:43:29 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container service-proxy-disabled ready: false, restart count 0 Jan 15 02:45:21.384: INFO: csi-mockplugin-attacher-0 started at 2023-01-15 02:42:38 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:45:21.384: INFO: coredns-867df8f45c-bk6lp started at 2023-01-15 02:21:13 +0000 UTC (0+1 container statuses recorded) 
Jan 15 02:45:21.384: INFO: Container coredns ready: true, restart count 0 Jan 15 02:45:21.384: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-m72pw started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:21.384: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-74jlv started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:21.384: INFO: netserver-3 started at 2023-01-15 02:40:31 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container webserver ready: true, restart count 0 Jan 15 02:45:21.384: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-ftqzz started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:21.384: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-q75dx started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:21.384: INFO: cilium-j8bh4 started at 2023-01-15 02:20:28 +0000 UTC (1+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Init container clean-cilium-state ready: true, restart count 0 Jan 15 02:45:21.384: INFO: Container cilium-agent ready: true, restart count 0 Jan 15 02:45:21.384: INFO: ebs-csi-node-5qjb6 started at 2023-01-15 02:20:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:45:21.384: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:45:21.384: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:45:21.384: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:45:21.384: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-l4dnx started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:45:21.384: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-6bj8q started at 2023-01-15 02:39:56 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: false, restart count 0 Jan 15 02:45:21.384: INFO: pod-projected-configmaps-33eb312a-7b6e-41ff-961c-fcdd3860dfcc started at 2023-01-15 02:40:39 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container agnhost-container ready: false, restart count 0 Jan 15 02:45:21.384: INFO: netserver-3 started at 2023-01-15 02:42:25 +0000 UTC (0+1 container statuses recorded) Jan 15 02:45:21.384: INFO: Container webserver ready: false, restart count 0 Jan 15 02:45:22.233: INFO: Latency metrics for node ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:45:22.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-7126" for this suite.
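The block above is the framework's per-node failure diagnostics: for each node it prints the Node object and then every pod the kubelet reports on that node, with per-container readiness. The same listing can be reproduced with client-go using a spec.nodeName field selector; a minimal sketch, with the kubeconfig path and node name copied from the log purely as placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path and node name are taken from the log above, purely for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Same query the diagnostics run: all pods, in any namespace, bound to this node.
	nodeName := "ip-172-20-49-36.ap-northeast-1.compute.internal"
	pods, err := client.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + nodeName,
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}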
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sServices\sshould\sbe\sable\sto\shandle\slarge\srequests\:\shttp$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:443
Jan 15 02:51:31.669: Unexpected error:
    <*errors.errorString | 0xc0002482d0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:858
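This failure is a readiness timeout: the test's setup waits for each netserver pod to become Running and Ready and gives up when the poll budget expires (the full wait loop follows below). A minimal sketch of that kind of wait with client-go and apimachinery's wait helper, not the framework's actual helper; the namespace, pod name, kubeconfig path, and 2s/5m cadence are taken from the log for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll every 2s, give up after 5m -- roughly the cadence and budget visible in the log.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("nettest-7353").Get(context.TODO(), "netserver-1", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			return false, nil // still Pending: keep waiting
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil // Running and Ready
			}
		}
		return false, nil // Running but not Ready yet
	})
	if err != nil {
		// On expiry this is wait.ErrWaitTimeout, whose message is the
		// "timed out waiting for the condition" string seen in the failure above.
		fmt.Println("netserver-1 never became Ready:", err)
	}
}

The error string in the failure above is exactly what this pattern surfaces on expiry, which is why the log reports only "timed out waiting for the condition" rather than naming the pod that never became Ready.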
[BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 15 02:46:08.805: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename nettest �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to handle large requests: http /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:443 �[1mSTEP�[0m: Performing setup for networking test in namespace nettest-7353 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes Jan 15 02:46:09.856: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 15 02:46:10.917: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:13.069: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:15.068: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:17.069: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:19.069: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:21.068: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 15 02:46:23.069: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 15 02:46:25.069: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 15 02:46:27.068: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 15 02:46:29.068: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 15 02:46:31.069: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 15 02:46:31.369: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:33.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:35.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:37.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:39.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:41.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:43.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:45.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:47.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:49.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:51.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:53.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:55.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:57.520: INFO: The 
status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:46:59.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:01.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:03.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:05.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:07.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:09.529: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:11.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:13.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:15.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:17.522: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:19.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:21.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:23.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:25.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:27.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:29.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:31.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:33.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:35.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:37.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:39.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:41.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:43.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:45.523: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:47.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:49.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:51.527: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:53.524: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:55.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:57.520: INFO: The status of Pod 
netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:47:59.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:01.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:03.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:05.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:07.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:09.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:11.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:13.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:15.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:17.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:19.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:21.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:23.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:25.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:27.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:29.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:31.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:33.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:35.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:37.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:39.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:41.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:43.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:45.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:47.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:49.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:51.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:53.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:55.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:57.524: INFO: The status of Pod netserver-1 is 
Pending, waiting for it to be Running (with Ready = true) Jan 15 02:48:59.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:01.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:03.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:05.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:07.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:09.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:11.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:13.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:15.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:17.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:19.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:21.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:23.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:25.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:27.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:29.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:31.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:33.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:35.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:37.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:39.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:41.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:43.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:45.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:47.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:49.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:51.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:53.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:55.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:49:57.520: INFO: The status of Pod netserver-1 is Pending, 
waiting for it to be Running (with Ready = true) Jan 15 02:49:59.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:01.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:03.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:05.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:07.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:09.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:11.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:13.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:15.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:17.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:19.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:21.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:23.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:25.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:27.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:29.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:31.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:33.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:35.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:37.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:39.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:41.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:43.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:45.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:47.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:49.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:51.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:53.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:55.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:50:57.520: INFO: The status of Pod netserver-1 is Pending, waiting for it 
to be Running (with Ready = true) Jan 15 02:50:59.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:01.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:03.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:05.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:07.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:09.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:11.521: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:13.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:15.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:17.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:19.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:21.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:23.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:25.519: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:27.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:29.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:31.520: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:31.669: INFO: The status of Pod netserver-1 is Pending, waiting for it to be Running (with Ready = true) Jan 15 02:51:31.669: FAIL: Unexpected error: <*errors.errorString | 0xc0002482d0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).createNetProxyPods(0xc00285e460, {0x70da7f5, 0x9}, 0xc004905440) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:858 +0x1d4 k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setupCore(0xc00285e460, 0xc004905440) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:760 +0x5b k8s.io/kubernetes/test/e2e/framework/network.(*NetworkingTestConfig).setup(0xc00285e460, 0x34) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:775 +0x3b k8s.io/kubernetes/test/e2e/framework/network.NewNetworkingTestConfig(0xc0012c3600, {0x0, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:129 +0x125 k8s.io/kubernetes/test/e2e/network.glob..func20.6.18() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/networking.go:444 +0x36 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) 
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "nettest-7353".
STEP: Found 43 events.
Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:10 +0000 UTC - event for netserver-0: {default-scheduler } Scheduled: Successfully assigned nettest-7353/netserver-0 to ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:10 +0000 UTC - event for netserver-1: {default-scheduler } Scheduled: Successfully assigned nettest-7353/netserver-1 to ip-172-20-49-36.ap-northeast-1.compute.internal Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:10 +0000 UTC - event for netserver-2: {default-scheduler } Scheduled: Successfully assigned nettest-7353/netserver-2 to ip-172-20-51-245.ap-northeast-1.compute.internal Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:10 +0000 UTC - event for netserver-3: {default-scheduler } Scheduled: Successfully assigned nettest-7353/netserver-3 to ip-172-20-62-124.ap-northeast-1.compute.internal Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:11 +0000 UTC - event for netserver-1: {kubelet ip-172-20-49-36.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e273083761922bf6a0325c45b243291c4720d7d4a6bebfe1e1cbb2495f56e834" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:12 +0000 UTC - event for netserver-0: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7f748f256737f834699d4aab12afd11a062e4762b190cbb8cd0d3f5d0b170768" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:12 +0000 UTC - event for netserver-1: {kubelet ip-172-20-49-36.ap-northeast-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:12 +0000 UTC - event for netserver-2: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "08fb1b63c9e5c81af5167a1afdcc1d5b4c15b0e901738705cbc0eb57d40d6ff2" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:12 +0000 UTC - event for netserver-3: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1042788cb9663138c69248f21b3baf6cbefb8fa6cf79169daee390bc660d1a56" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:13 +0000 UTC - event for netserver-0: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:13 +0000 UTC - event for netserver-1: {kubelet ip-172-20-49-36.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c94389fe419ed4c23e3a26df3d1495f77c81e2ac7170987342acd9a0e497a61d" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:13 +0000 UTC - event for netserver-2: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:14 +0000 UTC - event for netserver-2: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b66386808429c9c4c9b6f668da4a2fe413fcbcc2f8694bf803404416a0ebaf1a" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:14 +0000 UTC - event for netserver-3: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:15 +0000 UTC - event for netserver-0: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c33436d9896685c2ad6ccf59d2b7de333cb6637878f7ba92fff83bbc547664e1" network for pod "netserver-0": networkPlugin cni failed to set up pod "netserver-0_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:15 +0000 UTC - event for netserver-1: {kubelet ip-172-20-49-36.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9d54eec17a95fa8b39dc7351c7f16814d6a71c404f6e2aeb591729a6109ad1c1" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:15 +0000 UTC - event for netserver-2: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "5393ab0374f15a7b3b52b1a4e9be5dcb2e3257456a11b3f2fb4fea8e279598dd" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:15 +0000 UTC - event for netserver-3: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "6eadff723cf5e67ab1c5440f6836b9af5b7c670751f2c375f9b7b5597eb05b7a" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:17 +0000 UTC - event for netserver-1: {kubelet ip-172-20-49-36.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d8e6726775756d6bc2943cb05756123d1cdd195c1b99c0cdae30a06f205ebdf1" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:17 +0000 UTC - event for netserver-2: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f96676da71f16fb5dd9e51d4b6ca028ff56ce65e995fed6f4e398bde97146ab2" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:17 +0000 UTC - event for netserver-3: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container 
"2b0f5e0e4239b84acb7161e07dd7ad7e4203dd29f5745abd6b0d0c22690c0288" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:18 +0000 UTC - event for netserver-0: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} Created: Created container webserver Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:18 +0000 UTC - event for netserver-0: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} Started: Started container webserver Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:18 +0000 UTC - event for netserver-0: {kubelet ip-172-20-43-202.ap-northeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:18 +0000 UTC - event for netserver-2: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f33c6f099e4dc27a17eb11985bbc289ec315d16febb2ef008ed1df5b01faf540" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:19 +0000 UTC - event for netserver-1: {kubelet ip-172-20-49-36.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c29744235c4a660e4bff3fea58b055ac8d4e3a29d5ed4e4eca53a65244befc31" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:19 +0000 UTC - event for netserver-2: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "08c914809dec2a6c5430514fe9f46da1ddf38dfe657562d25b8d95e07e161a90" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:19 +0000 UTC - event for netserver-3: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e007f96808262e75f4b9467264f69b202963bf251b364b5d82d68417b3f7de13" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:21 +0000 UTC - event for netserver-1: {kubelet ip-172-20-49-36.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "20ea7973abb6623f3a496bddfc5a16d7d0572f1e232bc5a6e77cdb98d34efa35" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure 
No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:21 +0000 UTC - event for netserver-3: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e47caa7525f6d7a55828ccd07f2364a14fd8674a3f50f094a79651c737fd915e" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:22 +0000 UTC - event for netserver-2: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "aec40b80e5811b39f687accc2f01be3bdc5446bc6197e766e9c14c392efc69c8" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:23 +0000 UTC - event for netserver-1: {kubelet ip-172-20-49-36.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1fa7cdfd32d4ee567f36dc243bca8ffdb6efe977339b05c4112a26f9d0a0c9d7" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:23 +0000 UTC - event for netserver-3: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "664ba8275bbff410027dbf90718a90d0d749b1294dd059134569279c42bbcb9e" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:24 +0000 UTC - event for netserver-2: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "c22184e6aca8f34767a01a79139794aade3226127da13a8453c013db3014da7b" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:24 +0000 UTC - event for netserver-3: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "0d2790b4beca1b1a528ca51b589e5d7c7c5934170162c39572fae0f5d9f0a56c" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:25 +0000 UTC - event for netserver-1: {kubelet ip-172-20-49-36.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container 
"e89272d7b26c3f72183877ebbfa13a87033f29c7ef1769580f2a87da6eb6f2f6" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:25 +0000 UTC - event for netserver-2: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "91978cab9e593e24956aa136606e9128f01a2b24c1b8a9ab027f0ba5f561a51e" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:26 +0000 UTC - event for netserver-1: {kubelet ip-172-20-49-36.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "571a40e9a9d06b88afe6c254b81340bb0b637d77abc6d37c7e0386e3cdb33a67" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:26 +0000 UTC - event for netserver-3: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7578ebca209632bdd7814aa43c62c396168dcb7396cebe31896d2e41af44b141" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:27 +0000 UTC - event for netserver-2: {kubelet ip-172-20-51-245.ap-northeast-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7951b24f7266fb61c4e18f481166866076e32b28b5f496c2b165e3b94ba36e92" network for pod "netserver-2": networkPlugin cni failed to set up pod "netserver-2_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:27 +0000 UTC - event for netserver-3: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} FailedCreatePodSandBox: Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8d8f703a97257a3bef8cfbf5d4a6cb31ddac75dd229a93dc846b5163d031a6c7" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:28 +0000 UTC - event for netserver-1: {kubelet ip-172-20-49-36.ap-northeast-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "729108eb6adbbe3bde752aff26ecc6ba493ec84e0c562fb2934f07590b8ec6e7" network for pod "netserver-1": networkPlugin cni failed to set up pod "netserver-1_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] 
postIpamFailure No more IPs available Jan 15 02:51:31.821: INFO: At 2023-01-15 02:46:29 +0000 UTC - event for netserver-3: {kubelet ip-172-20-62-124.ap-northeast-1.compute.internal} FailedCreatePodSandBox: (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "f47d18ef42b6efade2b8c5cce094ba7a33b4a6a8b8af54900dfc583a9ba9abf8" network for pod "netserver-3": networkPlugin cni failed to set up pod "netserver-3_nettest-7353" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure No more IPs available Jan 15 02:51:31.971: INFO: POD NODE PHASE GRACE CONDITIONS Jan 15 02:51:31.971: INFO: netserver-0 ip-172-20-43-202.ap-northeast-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:30 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC }] Jan 15 02:51:31.971: INFO: netserver-1 ip-172-20-49-36.ap-northeast-1.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC }] Jan 15 02:51:31.971: INFO: netserver-2 ip-172-20-51-245.ap-northeast-1.compute.internal Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC }] Jan 15 02:51:31.971: INFO: netserver-3 ip-172-20-62-124.ap-northeast-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:49:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:49:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-15 02:46:10 +0000 UTC }] Jan 15 02:51:31.971: INFO: Jan 15 02:51:32.302: INFO: Unable to fetch nettest-7353/netserver-1/webserver logs: the server rejected our request for an unknown reason (get pods netserver-1) Jan 15 02:51:32.453: INFO: Unable to fetch nettest-7353/netserver-2/webserver logs: the server rejected our request for an unknown reason (get pods netserver-2) Jan 15 02:51:32.764: INFO: Logging node info for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:51:32.914: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-43-202.ap-northeast-1.compute.internal bc0ee9cc-d624-47c0-9b67-04d40afa9f71 49892 0 2023-01-15 02:20:27 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-northeast-1 failure-domain.beta.kubernetes.io/zone:ap-northeast-1a kubelet_cleanup:true kubernetes.io/arch:amd64 
kubernetes.io/hostname:ip-172-20-43-202.ap-northeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-northeast-1a topology.hostpath.csi/node:ip-172-20-43-202.ap-northeast-1.compute.internal topology.kubernetes.io/region:ap-northeast-1 topology.kubernetes.io/zone:ap-northeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-011b6dcd6ec6449ff"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-15 02:20:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {e2e.test Update v1 2023-01-15 02:39:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kubelet_cleanup":{}}}} } {kube-controller-manager Update v1 2023-01-15 02:42:39 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2023-01-15 02:45:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-northeast-1a/i-011b6dcd6ec6449ff,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{50478186496 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4078206976 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{45430367772 0} {<nil>} 45430367772 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3973349376 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-15 02:51:11 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-15 02:51:11 +0000 UTC,LastTransitionTime:2023-01-15 
02:20:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-15 02:51:11 +0000 UTC,LastTransitionTime:2023-01-15 02:20:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-15 02:51:11 +0000 UTC,LastTransitionTime:2023-01-15 02:20:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.43.202,},NodeAddress{Type:ExternalIP,Address:13.231.182.149,},NodeAddress{Type:Hostname,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-43-202.ap-northeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-231-182-149.ap-northeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec22781cfe518dfd12a3a957bb6e997a,SystemUUID:ec22781c-fe51-8dfd-12a3-a957bb6e997a,BootID:949cecd1-b820-47c2-afb4-e8715b8e9ccd,KernelVersion:4.19.0-23-cloud-amd64,OSImage:Debian GNU/Linux 10 (buster),ContainerRuntimeVersion:docker://20.10.17,KubeletVersion:v1.23.15,KubeProxyVersion:v1.23.15,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5 quay.io/cilium/cilium:v1.12.5],SizeBytes:456569528,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:253371792,},ContainerImage{Names:[nginx@sha256:b8f2383a95879e1ae064940d9a200f67a6c79e710ed82ac42263397367e7cc4e nginx:latest],SizeBytes:141812353,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:126894770,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:126872991,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:124758741,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.15 registry.k8s.io/kube-proxy-amd64:v1.23.15],SizeBytes:112336036,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:f0c5de192d832e7c1daa6580d4a62e8fa6fc8eabc0917ae4cb7ed4d15e95b59e registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.14.1],SizeBytes:94471559,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:56382329,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:56379269,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:53956308,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:53531990,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:53500826,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:53499570,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:49176472,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:42321438,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:29977921,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:19662887,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:19582112,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:18699436,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:17680993,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:406f59599991916d2942d8d02f076d957ed71b541ee19f09fc01723a6e6f5932 registry.k8s.io/sig-storage/livenessprobe:v2.6.0],SizeBytes:17611980,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:17175488,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:16032814,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:682696,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-01c93361e2b930fdd kubernetes.io/csi/ebs.csi.aws.com^vol-0286d26f63c64e1e2 
kubernetes.io/csi/ebs.csi.aws.com^vol-0e64399c317c780b9],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-01c93361e2b930fdd,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0e64399c317c780b9,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0286d26f63c64e1e2,DevicePath:,},},Config:nil,},} Jan 15 02:51:32.914: INFO: Logging kubelet events for node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:51:33.065: INFO: Logging pods the kubelet thinks is on node ip-172-20-43-202.ap-northeast-1.compute.internal Jan 15 02:51:33.247: INFO: csi-hostpathplugin-0 started at 2023-01-15 02:51:27 +0000 UTC (0+7 container statuses recorded) Jan 15 02:51:33.247: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:51:33.247: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:51:33.247: INFO: Container csi-resizer ready: false, restart count 0 Jan 15 02:51:33.247: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 15 02:51:33.247: INFO: Container hostpath ready: false, restart count 0 Jan 15 02:51:33.247: INFO: Container liveness-probe ready: false, restart count 0 Jan 15 02:51:33.247: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 15 02:51:33.247: INFO: pod-configmaps-b8f7d393-de83-47a9-955b-cd23b047200d started at 2023-01-15 02:48:00 +0000 UTC (0+1 container statuses recorded) Jan 15 02:51:33.247: INFO: Container agnhost-container ready: false, restart count 0 Jan 15 02:51:33.247: INFO: csi-mockplugin-0 started at 2023-01-15 02:46:30 +0000 UTC (0+4 container statuses recorded) Jan 15 02:51:33.247: INFO: Container busybox ready: false, restart count 0 Jan 15 02:51:33.247: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:51:33.247: INFO: Container driver-registrar ready: false, restart count 0 Jan 15 02:51:33.247: INFO: Container mock ready: false, restart count 0 Jan 15 02:51:33.247: INFO: local-injector started at 2023-01-15 02:51:06 +0000 UTC (0+1 container statuses recorded) Jan 15 02:51:33.247: INFO: Container local-injector ready: false, restart count 0 Jan 15 02:51:33.247: INFO: ebs-csi-node-bwkxc started at 2023-01-15 02:20:28 +0000 UTC (0+3 container statuses recorded) Jan 15 02:51:33.247: INFO: Container ebs-plugin ready: true, restart count 0 Jan 15 02:51:33.247: INFO: Container liveness-probe ready: true, restart count 0 Jan 15 02:51:33.247: INFO: Container node-driver-registrar ready: true, restart count 0 Jan 15 02:51:33.247: INFO: csi-hostpathplugin-0 started at 2023-01-15 02:49:05 +0000 UTC (0+7 container statuses recorded) Jan 15 02:51:33.247: INFO: Container csi-attacher ready: false, restart count 0 Jan 15 02:51:33.247: INFO: Container csi-provisioner ready: false, restart count 0 Jan 15 02:51:33.247: INFO: Container csi-resizer ready: false, restart count 0 Jan 15 02:51:33.248: INFO: Container csi-snapshotter ready: false, restart count 0 Jan 15 02:51:33.248: INFO: Container hostpath ready: false, restart count 0 Jan 15 02:51:33.248: INFO: Container liveness-probe ready: false, restart count 0 Jan 15 02:51:33.248: INFO: Container node-driver-registrar ready: false, restart count 0 Jan 15 02:51:33.248: INFO: netserver-0 started at 2023-01-15 02:46:10 +0000 UTC (0+1 container statuses recorded) Jan 15 02:51:33.248: INFO: Container webserver ready: true, restart count 0 Jan 15 02:51:33.248: INFO: exec-volume-test-dynamicpv-t4jj started at 2023-01-15 02:47:24 +0000 UTC (0+1 container statuses 
recorded) Jan 15 02:51:33.248: INFO: Container exec-container-dynamicpv-t4jj ready: false, restart count 0 Jan 15 02:51:33.248: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-rjqfv started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:51:33.248: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:51:33.248: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-8sg5j started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:51:33.248: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:51:33.248: INFO: pod-ed146c16-d6fe-4a57-9e00-84562904396e started at 2023-01-15 02:51:01 +0000 UTC (0+1 container statuses recorded) Jan 15 02:51:33.248: INFO: Container write-pod ready: false, restart count 0 Jan 15 02:51:33.248: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-zg7rx started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:51:33.248: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:51:33.248: INFO: cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063-kspn8 started at 2023-01-15 02:39:57 +0000 UTC (0+1 container statuses recorded) Jan 15 02:51:33.248: INFO: Container cleanup40-cc980baf-1eae-495d-8e73-38be4e2cc063 ready: true, restart count 0 Jan 15 02:51:33.248: INFO: pod-ready started at