Result | FAILURE |
Tests | 7 failed / 786 succeeded |
Started | |
Elapsed | 41m24s |
Revision | master |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sAvailableReplicas\sshould\sget\supdated\saccordingly\swhen\sMinReadySeconds\sis\senabled$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:1165 Dec 2 09:20:40.893: Failed waiting for stateful set status.AvailableReplicas updated to 2: Get "https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/statefulset-1223/statefulsets/test-ss": http2: client connection lost /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:1186from junit_20.xml
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Dec 2 09:17:24.821: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [It] AvailableReplicas should get updated accordingly when MinReadySeconds is enabled /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:1165 Dec 2 09:17:26.500: INFO: Waiting for statefulset status.AvailableReplicas updated to 0 Dec 2 09:17:31.920: INFO: Waiting for statefulset status.AvailableReplicas updated to 2 Dec 2 09:17:32.130: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Dec 2 09:17:42.340: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Dec 2 09:17:52.342: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Dec 2 09:18:02.342: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Dec 2 09:18:12.340: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Dec 2 09:18:22.340: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 0 Dec 2 09:18:32.368: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 1 Dec 2 09:18:42.346: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 1 Dec 2 09:18:52.341: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 1 Dec 2 09:19:02.340: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 1 Dec 2 09:19:12.345: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 1 Dec 2 09:19:22.340: INFO: Waiting for stateful set status.AvailableReplicas to become 2, currently 1 Dec 2 09:20:40.883: FAIL: Failed waiting for stateful set status.AvailableReplicas updated to 2: Get "https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io/apis/apps/v1/namespaces/statefulset-1223/statefulsets/test-ss": http2: client connection lost Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func9.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:1186 +0x395 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000789860, 0x735d4a0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "statefulset-1223". �[1mSTEP�[0m: Found 10 events. 
Dec 2 09:20:41.707: INFO: At 2022-12-02 09:17:26 +0000 UTC - event for test-ss: {statefulset-controller } SuccessfulCreate: create Pod test-ss-0 in StatefulSet test-ss successful Dec 2 09:20:41.710: INFO: At 2022-12-02 09:17:26 +0000 UTC - event for test-ss-0: {default-scheduler } Scheduled: Successfully assigned statefulset-1223/test-ss-0 to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:41.710: INFO: At 2022-12-02 09:17:27 +0000 UTC - event for test-ss-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine Dec 2 09:20:41.710: INFO: At 2022-12-02 09:17:27 +0000 UTC - event for test-ss-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container webserver Dec 2 09:20:41.710: INFO: At 2022-12-02 09:17:27 +0000 UTC - event for test-ss-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: Created container webserver Dec 2 09:20:41.710: INFO: At 2022-12-02 09:18:26 +0000 UTC - event for test-ss: {statefulset-controller } SuccessfulCreate: create Pod test-ss-1 in StatefulSet test-ss successful Dec 2 09:20:41.710: INFO: At 2022-12-02 09:18:26 +0000 UTC - event for test-ss-1: {default-scheduler } Scheduled: Successfully assigned statefulset-1223/test-ss-1 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:41.710: INFO: At 2022-12-02 09:18:27 +0000 UTC - event for test-ss-1: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine Dec 2 09:20:41.710: INFO: At 2022-12-02 09:18:27 +0000 UTC - event for test-ss-1: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container webserver Dec 2 09:20:41.710: INFO: At 2022-12-02 09:18:27 +0000 UTC - event for test-ss-1: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container webserver Dec 2 09:20:41.950: INFO: POD NODE PHASE GRACE CONDITIONS Dec 2 09:20:41.950: INFO: test-ss-0 ip-172-20-34-182.ap-southeast-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:17:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:17:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:17:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:17:26 +0000 UTC }] Dec 2 09:20:41.951: INFO: test-ss-1 ip-172-20-37-90.ap-southeast-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:18:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:18:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:18:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:18:26 +0000 UTC }] Dec 2 09:20:41.951: INFO: Dec 2 09:20:42.837: INFO: Logging node info for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:43.063: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-34-182.ap-southeast-1.compute.internal fd7593c8-1a7c-4e6d-9018-4c36698568dc 38632 0 2022-12-02 09:02:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-34-182.ap-southeast-1.compute.internal 
kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-34-182.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7299":"csi-mock-csi-mock-volumes-7299","ebs.csi.aws.com":"i-070fdf3c5d5f93304"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.34.182/19 projectcalico.org/IPv4IPIPTunnelAddr:100.116.72.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {Go-http-client Update v1 2022-12-02 09:03:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-070fdf3c5d5f93304,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:22 +0000 
UTC,LastTransitionTime:2022-12-02 09:03:22 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:03:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.34.182,},NodeAddress{Type:ExternalIP,Address:54.169.57.14,},NodeAddress{Type:Hostname,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-57-14.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec264a17458d690f294e12b6a6b2138c,SystemUUID:ec264a17-458d-690f-294e-12b6a6b2138c,BootID:37b6e011-229a-4491-b86f-f149d97d10c0,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 
k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:43.064: INFO: Logging kubelet events for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:43.283: INFO: Logging pods the kubelet thinks is on node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:43.717: INFO: simpletest.rc-rptqs started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.717: INFO: pod-client started at 2022-12-02 09:19:00 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Container pod-client ready: true, restart count 0 Dec 2 09:20:43.717: INFO: coredns-5556cb978d-bx2m5 started at 2022-12-02 09:03:10 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Container coredns ready: true, restart count 0 Dec 2 09:20:43.717: INFO: csi-mockplugin-0 started at 2022-12-02 09:18:23 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:43.717: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:43.717: INFO: Container driver-registrar ready: true, restart count 0 Dec 2 09:20:43.717: INFO: Container mock ready: true, restart count 0 Dec 2 09:20:43.717: INFO: ss2-2 started at 2022-12-02 09:19:22 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Container webserver ready: false, restart count 0 Dec 2 09:20:43.717: INFO: simpletest.rc-w9lsq started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.717: INFO: simpletest.rc-swnct started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.717: INFO: simpletest.rc-tfx9v started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.717: INFO: simpletest.rc-rlzhz started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.717: INFO: simpletest.rc-ntn9m started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.717: INFO: calico-node-xhqfx started at 2022-12-02 09:02:23 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:43.717: INFO: Init container install-cni ready: true, restart count 0 Dec 2 09:20:43.717: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:43.717: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:43.717: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:43.717: INFO: startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c started at 2022-12-02 09:16:33 +0000 UTC (0+1 container statuses 
recorded) Dec 2 09:20:43.717: INFO: Container busybox ready: false, restart count 0 Dec 2 09:20:43.717: INFO: test-ss-0 started at 2022-12-02 09:17:26 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:43.717: INFO: kube-proxy-ip-172-20-34-182.ap-southeast-1.compute.internal started at 2022-12-02 09:02:02 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.717: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:43.718: INFO: ebs-csi-node-4b4zl started at 2022-12-02 09:02:23 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:43.718: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:43.718: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:43.718: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:44.498: INFO: Latency metrics for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:44.498: INFO: Logging node info for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:44.716: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-90.ap-southeast-1.compute.internal f779b12d-0e95-4e7f-929e-368941a29b99 40279 0 2022-12-02 09:02:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-90.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-37-90.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-001dd83f455b4a895"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.37.90/19 projectcalico.org/IPv4IPIPTunnelAddr:100.114.18.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:19:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-001dd83f455b4a895,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:05 +0000 UTC,LastTransitionTime:2022-12-02 09:03:05 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:02:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.90,},NodeAddress{Type:ExternalIP,Address:13.212.195.103,},NodeAddress{Type:Hostname,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-195-103.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec216e9b184e3e44fb8ed6af9b651047,SystemUUID:ec216e9b-184e-3e44-fb8e-d6af9b651047,BootID:0bbb1eb8-60c7-4bb1-b8c7-bb110f238f78,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 
(Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e 
registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:44.717: INFO: Logging kubelet events for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:44.938: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:45.158: INFO: agnhost-primary-dgxqj started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container agnhost-primary ready: false, restart count 0 Dec 2 09:20:45.158: INFO: ebs-csi-node-vswvn started at 2022-12-02 09:02:04 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:45.158: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:45.158: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:45.158: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:45.158: INFO: test-ss-1 started at 2022-12-02 09:18:26 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:45.158: INFO: execpodws7zw started at 2022-12-02 09:20:42 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container 
agnhost-container ready: false, restart count 0 Dec 2 09:20:45.158: INFO: pod-secrets-0da0406d-ca0f-4f4d-84a5-33a16c483cff started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container secret-volume-test ready: false, restart count 0 Dec 2 09:20:45.158: INFO: pod-terminate-status-0-14 started at 2022-12-02 09:20:41 +0000 UTC (1+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Init container fail ready: false, restart count 0 Dec 2 09:20:45.158: INFO: Container blocked ready: false, restart count 0 Dec 2 09:20:45.158: INFO: simpletest.rc-zj2ft started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:45.158: INFO: test-webserver-98190dda-eab4-4a0b-a4ec-afbb6264f9c0 started at 2022-12-02 09:18:17 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container test-webserver ready: true, restart count 0 Dec 2 09:20:45.158: INFO: coredns-autoscaler-85fcbbb64-kb6k7 started at 2022-12-02 09:02:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container autoscaler ready: true, restart count 0 Dec 2 09:20:45.158: INFO: simpletest.rc-njxsz started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:45.158: INFO: httpd started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container httpd ready: false, restart count 0 Dec 2 09:20:45.158: INFO: bin-falseb55ef21a-ef68-4260-9830-7a34a8977c97 started at 2022-12-02 09:20:42 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container bin-falseb55ef21a-ef68-4260-9830-7a34a8977c97 ready: false, restart count 0 Dec 2 09:20:45.158: INFO: kube-proxy-ip-172-20-37-90.ap-southeast-1.compute.internal started at 2022-12-02 09:01:54 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:45.158: INFO: calico-node-cqg7n started at 2022-12-02 09:02:04 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:45.158: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:20:45.158: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:45.158: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:45.158: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:45.158: INFO: simpletest.rc-r9d9b started at 2022-12-02 09:18:34 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:45.158: INFO: simpletest.rc-t5ztv started at 2022-12-02 09:18:31 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:45.158: INFO: simpletest.rc-xqqbd started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:45.158: INFO: coredns-5556cb978d-pztr5 started at 2022-12-02 09:02:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:45.158: INFO: Container coredns ready: true, restart count 0 Dec 2 09:20:46.564: INFO: Latency metrics for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:46.564: INFO: Logging node info for node 
ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:46.777: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-49-67.ap-southeast-1.compute.internal 81600d2c-3d2a-4421-913e-e1c53c1ad1df 41217 0 2022-12-02 09:02:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a io.kubernetes.storage.mock/node:some-mock-node kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-49-67.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-49-67.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1102":"ip-172-20-49-67.ap-southeast-1.compute.internal","ebs.csi.aws.com":"i-056f60b74d454bea7"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.49.67/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.24.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:03:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:18:47 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:18:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-056f60b74d454bea7,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:19 +0000 UTC,LastTransitionTime:2022-12-02 09:03:19 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:03:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.49.67,},NodeAddress{Type:ExternalIP,Address:13.228.79.89,},NodeAddress{Type:Hostname,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-228-79-89.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2bd833fc2a274ccf3bf225f245ddce,SystemUUID:ec2bd833-fc2a-274c-cf3b-f225f245ddce,BootID:1ab59414-4d0c-4bc8-bb64-5f41a1b02c74,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba 
docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:dd6d57960dc104a4ee0fa7c58c6faa3e38725561af374c17f8cb905f7f73ba66 k8s.gcr.io/build-image/debian-iptables:bullseye-v1.1.0],SizeBytes:27059231,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 
k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b,DevicePath:,},},Config:nil,},} Dec 2 09:20:46.779: INFO: Logging kubelet events for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:46.998: INFO: Logging pods the kubelet thinks is on node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:47.227: INFO: simpletest.rc-s8s8z started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:47.228: INFO: private started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:47.228: INFO: externalsvc-gfw8b started at 2022-12-02 09:19:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container externalsvc ready: true, restart count 0 Dec 2 09:20:47.228: INFO: slave started at 2022-12-02 09:19:22 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:47.228: INFO: ss-0 started at 2022-12-02 09:19:03 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:47.228: INFO: svc-latency-rc-n6rnr started at 2022-12-02 09:19:15 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container svc-latency-rc ready: true, restart count 0 Dec 2 09:20:47.228: INFO: calico-node-n6lj9 started at 2022-12-02 09:02:20 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:47.228: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:20:47.228: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:47.228: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:47.228: INFO: Container calico-node ready: true, 
restart count 0 Dec 2 09:20:47.228: INFO: master started at 2022-12-02 09:19:13 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:47.228: INFO: downwardapi-volume-e3f86704-2ad4-4471-80f7-f49d1890acfa started at 2022-12-02 09:20:44 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container client-container ready: false, restart count 0 Dec 2 09:20:47.228: INFO: simpletest.rc-xt5qf started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:47.228: INFO: simpletest.rc-s98w8 started at 2022-12-02 09:18:31 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:47.228: INFO: kube-proxy-ip-172-20-49-67.ap-southeast-1.compute.internal started at 2022-12-02 09:01:59 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:47.228: INFO: simpletest.rc-q75ts started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:47.228: INFO: oidc-discovery-validator started at 2022-12-02 09:19:03 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container oidc-discovery-validator ready: false, restart count 0 Dec 2 09:20:47.228: INFO: simpletest.rc-sdlx6 started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:47.228: INFO: simpletest.rc-vjkr4 started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:47.228: INFO: simpletest.rc-qfccr started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:47.228: INFO: ss2-0 started at 2022-12-02 09:19:03 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.228: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:47.228: INFO: ebs-csi-node-w9kzj started at 2022-12-02 09:02:20 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:47.228: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:47.228: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:47.228: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:47.228: INFO: default started at 2022-12-02 09:20:44 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.229: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:47.229: INFO: csi-hostpathplugin-0 started at 2022-12-02 09:18:29 +0000 UTC (0+7 container statuses recorded) Dec 2 09:20:47.229: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:20:47.229: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:47.229: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:20:47.229: INFO: Container csi-snapshotter ready: true, restart count 0 Dec 2 09:20:47.229: INFO: Container hostpath ready: true, restart count 0 Dec 2 09:20:47.229: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:47.229: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:47.229: INFO: simpletest.rc-nxlcw started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses 
recorded) Dec 2 09:20:47.229: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:48.365: INFO: Latency metrics for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:48.365: INFO: Logging node info for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:20:48.594: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-55-194.ap-southeast-1.compute.internal 890854e9-f510-402d-9886-49c1d41318f4 34763 0 2022-12-02 09:00:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:master-ap-southeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-55-194.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-00b46fae03d775a19"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.55.194/19 projectcalico.org/IPv4IPIPTunnelAddr:100.104.201.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-02 09:00:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2022-12-02 09:01:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2022-12-02 09:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:01:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {Go-http-client Update v1 2022-12-02 09:02:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:02:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-00b46fae03d775a19,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3894931456 0} {<nil>} 3803644Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790073856 0} {<nil>} 3701244Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:02:00 +0000 UTC,LastTransitionTime:2022-12-02 09:02:00 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:01:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.194,},NodeAddress{Type:ExternalIP,Address:54.169.84.77,},NodeAddress{Type:Hostname,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-84-77.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2521391aeba8d2805b54ac578aa7d0,SystemUUID:ec252139-1aeb-a8d2-805b-54ac578aa7d0,BootID:4e785fe8-5068-4fd6-b8b0-5a4aae03c815,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:256a64fb44876d270f04ada1afd3ca431341f249aa52cbe2b3780f8f23961142 
registry.k8s.io/etcdadm/etcd-manager:v3.0.20220727],SizeBytes:216364516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.14],SizeBytes:136567243,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.14],SizeBytes:126380852,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.14],SizeBytes:54860595,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:58cc91c551e9e941a752e205eefed1c8da56f97a51e054b3d341b67bb7bf27eb docker.io/calico/kube-controllers:v3.23.5],SizeBytes:53774679,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.24.5],SizeBytes:41269276,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.24.5],SizeBytes:40816784,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.24.5],SizeBytes:5130223,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:48.621: INFO: Logging kubelet events for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:20:48.850: INFO: Logging pods the kubelet thinks is on node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:20:51.765: INFO: kube-controller-manager-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:51.831: INFO: Container kube-controller-manager ready: true, restart count 2 Dec 2 09:20:51.835: INFO: kube-proxy-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 
09:20:51.845: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:51.845: INFO: kube-scheduler-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:51.845: INFO: Container kube-scheduler ready: true, restart count 0 Dec 2 09:20:51.845: INFO: calico-node-xfrb9 started at 2022-12-02 09:01:32 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:51.845: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:51.851: INFO: Init container install-cni ready: true, restart count 0 Dec 2 09:20:51.851: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:51.851: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:51.851: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:51.851: INFO: kops-controller-7l85j started at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:51.851: INFO: Container kops-controller ready: true, restart count 0 Dec 2 09:20:51.851: INFO: etcd-manager-events-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:51.851: INFO: Container etcd-manager ready: true, restart count 0 Dec 2 09:20:51.851: INFO: etcd-manager-main-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:51.851: INFO: Container etcd-manager ready: true, restart count 0 Dec 2 09:20:51.851: INFO: kube-apiserver-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+2 container statuses recorded) Dec 2 09:20:51.851: INFO: Container healthcheck ready: true, restart count 0 Dec 2 09:20:51.851: INFO: Container kube-apiserver ready: true, restart count 1 Dec 2 09:20:51.851: INFO: ebs-csi-controller-55c8659c7c-sqq7m started at 2022-12-02 09:01:32 +0000 UTC (0+5 container statuses recorded) Dec 2 09:20:51.851: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:20:51.851: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:51.851: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:20:51.851: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:51.851: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:51.851: INFO: ebs-csi-node-rfwfq started at 2022-12-02 09:01:32 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:51.851: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:51.851: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:51.851: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:51.851: INFO: dns-controller-847484c97f-z8rs4 started at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:51.851: INFO: Container dns-controller ready: true, restart count 0 Dec 2 09:20:51.851: INFO: calico-kube-controllers-795c657547-9mz5t started at 2022-12-02 09:01:48 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:51.851: INFO: Container calico-kube-controllers ready: true, restart count 0 Dec 2 09:20:55.146: INFO: Latency metrics for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:20:55.154: INFO: Logging node info for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:55.685: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-60-164.ap-southeast-1.compute.internal 4d06e01c-27c4-4c2f-b118-647413c7ddf6 40537 0 
2022-12-02 09:02:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-60-164.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-60-164.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9857":"ip-172-20-60-164.ap-southeast-1.compute.internal","ebs.csi.aws.com":"i-0a7cd257efff997b0"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.60.164/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.61.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:17:54 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:17:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-0a7cd257efff997b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:11 +0000 UTC,LastTransitionTime:2022-12-02 09:03:11 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:02:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.60.164,},NodeAddress{Type:ExternalIP,Address:13.212.105.239,},NodeAddress{Type:Hostname,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-105-239.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ab9d0d1126900acfd3b82032bd9b,SystemUUID:ec28ab9d-0d11-2690-0acf-d3b82032bd9b,BootID:925eb9d6-3c66-49ad-be43-0411968ca10c,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c 
k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 
registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0413aef78391fa3e6],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0413aef78391fa3e6,DevicePath:,},},Config:nil,},} Dec 2 09:20:55.881: INFO: Logging kubelet events for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:56.406: INFO: Logging pods the kubelet thinks is on node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:25:47.194: INFO: externalsvc-kc489 started at 2022-12-02 09:19:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:47.197: INFO: Container externalsvc ready: true, restart count 0 Dec 2 09:25:47.199: INFO: csi-hostpathplugin-0 started at 2022-12-02 09:17:33 +0000 UTC (0+7 container statuses recorded) Dec 2 09:25:47.199: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:25:47.201: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:25:47.201: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:25:47.201: INFO: Container csi-snapshotter ready: true, restart count 0 Dec 2 09:25:47.201: INFO: Container hostpath ready: true, restart count 0 Dec 2 09:25:47.201: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:25:47.201: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:25:47.201: INFO: hostexec-ip-172-20-60-164.ap-southeast-1.compute.internal-qrptd started at 2022-12-02 09:20:43 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:47.201: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:25:47.201: INFO: calico-node-gv4lf started at 2022-12-02 09:02:06 +0000 UTC (4+1 container statuses recorded) Dec 2 09:25:47.201: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:25:47.204: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:25:47.204: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:25:47.204: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:25:47.204: INFO: Container calico-node ready: true, 
restart count 0 Dec 2 09:25:47.204: INFO: kube-proxy-ip-172-20-60-164.ap-southeast-1.compute.internal started at 2022-12-02 09:01:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:47.204: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:25:47.204: INFO: ss2-1 started at 2022-12-02 09:19:19 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:47.204: INFO: Container webserver ready: true, restart count 0 Dec 2 09:25:47.204: INFO: pod-terminate-status-2-14 started at 2022-12-02 09:19:29 +0000 UTC (1+1 container statuses recorded) Dec 2 09:25:47.204: INFO: Init container fail ready: false, restart count 0 Dec 2 09:25:47.204: INFO: Container blocked ready: false, restart count 0 Dec 2 09:25:47.204: INFO: pod-service-account-mountsa-nomountspec started at 2022-12-02 09:20:56 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:47.204: INFO: Container token-test ready: false, restart count 0 Dec 2 09:25:47.204: INFO: external-client started at 2022-12-02 09:19:27 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:47.204: INFO: Container external-client ready: true, restart count 0 Dec 2 09:25:47.204: INFO: ebs-csi-node-lrwc5 started at 2022-12-02 09:02:06 +0000 UTC (0+3 container statuses recorded) Dec 2 09:25:47.205: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:25:47.205: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:25:47.205: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:25:48.644: INFO: Latency metrics for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:25:48.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-1223" for this suite.
Filter through log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sStatefulSet\sBasic\sStatefulSet\sfunctionality\s\[StatefulSetBasic\]\sshould\sperform\scanary\supdates\sand\sphased\srolling\supdates\sof\stemplate\smodifications\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Dec 2 09:25:47.119: Unexpected error: <*fmt.wrapError | 0xc003298040>: { msg: "unexpected error when reading response body. Please retry. Original error: http2: client connection lost", err: { s: "http2: client connection lost", }, } unexpected error when reading response body. Please retry. Original error: http2: client connection lost occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68from junit_21.xml
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data","total":-1,"completed":15,"skipped":154,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Dec 2 09:19:01.265: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 �[1mSTEP�[0m: Creating service test in namespace statefulset-2522 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a new StatefulSet Dec 2 09:19:03.298: INFO: Found 1 stateful pods, waiting for 3 Dec 2 09:19:13.499: INFO: Found 1 stateful pods, waiting for 3 Dec 2 09:19:23.522: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 2 09:19:23.526: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 2 09:19:23.526: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Dec 2 09:20:41.013: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Dec 2 09:20:41.017: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Dec 2 09:20:41.018: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 Dec 2 09:20:42.193: INFO: Updating stateful set ss2 �[1mSTEP�[0m: Creating a new revision �[1mSTEP�[0m: Not applying an update when the partition is greater than the number of replicas �[1mSTEP�[0m: Performing a canary update Dec 2 09:20:43.022: INFO: Updating stateful set ss2 Dec 2 09:20:43.431: INFO: Waiting for Pod statefulset-2522/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb �[1mSTEP�[0m: Restoring Pods to the correct revision when they are deleted Dec 2 09:20:56.227: INFO: Found 2 stateful pods, waiting for 3 E1202 09:25:47.038194 6682 request.go:1101] Unexpected error when reading response body: http2: client connection lost Dec 2 09:25:47.117: FAIL: Unexpected error: <*fmt.wrapError | 0xc003298040>: { msg: "unexpected error when reading response body. Please retry. Original error: http2: client connection lost", err: { s: "http2: client connection lost", }, } unexpected error when reading response body. Please retry. 
Original error: http2: client connection lost occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7b05650, 0xc002dc2300}, 0xc0006a2f00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x50 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0xc00312c7e0, 0xc0001b4088}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d3b68, 0xc00005e048}, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x79d3b68, 0xc00005e048}, 0xc0043d2648, 0x2cc954a) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 +0x118 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d3b68, 0xc00005e048}, 0x48, 0x2cc8045, 0x28) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d3b68, 0xc00005e048}, 0x2, 0xc002238f98, 0x2441ec7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000066880, 0xc002238fd8, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7b05650, 0xc002dc2300}, 0x3, 0x3, 0xc0006a2f00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func9.2.8() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:425 +0x1b73 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000970820, 0x735d4a0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a E1202 09:25:47.151360 6682 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Dec 2 09:25:47.119: Unexpected error:\n <*fmt.wrapError | 0xc003298040>: {\n msg: \"unexpected error when reading response body. Please retry. Original error: http2: client connection lost\",\n err: {\n s: \"http2: client connection lost\",\n },\n }\n unexpected error when reading response body. Please retry. 
Original error: http2: client connection lost\noccurred", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go", Line:68, FullStackTrace:"k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7b05650, 0xc002dc2300}, 0xc0006a2f00)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x153\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x50\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0xc00312c7e0, 0xc0001b4088})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d3b68, 0xc00005e048}, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x79d3b68, 0xc00005e048}, 0xc0043d2648, 0x2cc954a)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 +0x118\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d3b68, 0xc00005e048}, 0x48, 0x2cc8045, 0x28)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d3b68, 0xc00005e048}, 0x2, 0xc002238f98, 0x2441ec7)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000066880, 0xc002238fd8, 0x1)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7b05650, 0xc002dc2300}, 0x3, 0x3, 0xc0006a2f00)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0xbd\nk8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80\nk8s.io/kubernetes/test/e2e/apps.glob..func9.2.8()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:425 +0x1b73\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697\nk8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc000970820, 0x735d4a0)\n\t/usr/local/go/src/testing/testing.go:1259 +0x102\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1306 +0x35a"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. 
To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ) goroutine 148 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6c3bd00, 0xc00427a180}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75 panic({0x6c3bd00, 0xc00427a180}) /usr/local/go/src/runtime/panic.go:1038 +0x215 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x73 panic({0x62d47a0, 0x78aa9e0}) /usr/local/go/src/runtime/panic.go:1038 +0x215 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc000cf4d00, 0x18c}, {0xc0022384a0, 0x0, 0x40}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc000cf4d00, 0x18c}, {0xc002238580, 0x70cab8a, 0xc0022385a0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7 k8s.io/kubernetes/test/e2e/framework.Fail({0xc001094300, 0x177}, {0xc000530290, 0xc001094300, 0x1}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:63 +0x149 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc0022386e8, {0x79bd678, 0xaa10408}, 0x0, {0x0, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:79 +0x1bd k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc0022386e8, {0x79bd678, 0xaa10408}, {0x0, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0x92 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x0, {0x78b0d60, 0xc003298040}, {0x0, 0xc004924690, 0x10}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xa9 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40 k8s.io/kubernetes/test/e2e/framework/statefulset.GetPodList({0x7b05650, 0xc002dc2300}, 0xc0006a2f00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/rest.go:68 +0x153 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:37 +0x50 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0xc00312c7e0, 0xc0001b4088}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d3b68, 0xc00005e048}, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x79d3b68, 0xc00005e048}, 0xc0043d2648, 0x2cc954a) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 +0x118 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d3b68, 0xc00005e048}, 0x48, 0x2cc8045, 0x28) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d3b68, 0xc00005e048}, 0x2, 0xc002238f98, 0x2441ec7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc000066880, 0xc002238fd8, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50 k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunning({0x7b05650, 0xc002dc2300}, 0x3, 0x3, 0xc0006a2f00) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:35 +0xbd k8s.io/kubernetes/test/e2e/framework/statefulset.WaitForRunningAndReady(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset/wait.go:80 k8s.io/kubernetes/test/e2e/apps.glob..func9.2.8() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:425 +0x1b73 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0001b4000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xba k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc001e355c8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x125 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00392eff0, 0xc001e35990, {0x78b4560, 0xc000066880}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x2a9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00392eff0, {0x78b4560, 0xc000066880}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xe7 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00475c000, 0xc00392eff0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0xe5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00475c000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x1a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00475c000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0xc5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00011a070, {0x7f5570193bb8, 0xc000970820}, {0x710a6bd, 0x40}, {0xc00055a060, 0x3, 0x3}, {0x7a2bdb8, 0xc000066880}, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x4d2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x78baec0, 0xc000970820}, {0x710a6bd, 0x14}, {0xc00055e040, 0x3, 0x6}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x185 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x78baec0, 0xc000970820}, {0x710a6bd, 0x14}, {0xc000550000, 0x2, 0x2}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0xf9 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000970820, 0x735d4a0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Dec 2 09:25:48.052: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/2beff379-721f-11ed-88e2-f6fea4ddc280/kubectl --server=https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-2522 describe po ss2-0' Dec 2 09:25:49.218: INFO: stderr: "" Dec 2 09:25:49.218: INFO: stdout: "Name: ss2-0\nNamespace: statefulset-2522\nPriority: 0\nNode: ip-172-20-37-90.ap-southeast-1.compute.internal/172.20.37.90\nStart Time: Fri, 02 Dec 2022 09:20:55 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-57bbdd95cb\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations: cni.projectcalico.org/containerID: 4899c0fc4487539b8a8464581ee66a9192216103a0e2dd792abd690e6902bcfb\n cni.projectcalico.org/podIP: 100.114.18.70/32\n cni.projectcalico.org/podIPs: 100.114.18.70/32\nStatus: Running\nIP: 100.114.18.70\nIPs:\n IP: 100.114.18.70\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: containerd://8d1401fa52a92aee89fa7f211b6b5f189824759c39962731ac9dae45ac03503b\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\n Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Fri, 02 Dec 2022 09:20:56 +0000\n Ready: True\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bwbb6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-bwbb6:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4m54s default-scheduler Successfully assigned statefulset-2522/ss2-0 to 
ip-172-20-37-90.ap-southeast-1.compute.internal\n Normal Pulled 4m53s kubelet Container image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\" already present on machine\n Normal Created 4m53s kubelet Created container webserver\n Normal Started 4m53s kubelet Started container webserver\n" Dec 2 09:25:49.218: INFO: Output of kubectl describe ss2-0: Name: ss2-0 Namespace: statefulset-2522 Priority: 0 Node: ip-172-20-37-90.ap-southeast-1.compute.internal/172.20.37.90 Start Time: Fri, 02 Dec 2022 09:20:55 +0000 Labels: baz=blah controller-revision-hash=ss2-57bbdd95cb foo=bar statefulset.kubernetes.io/pod-name=ss2-0 Annotations: cni.projectcalico.org/containerID: 4899c0fc4487539b8a8464581ee66a9192216103a0e2dd792abd690e6902bcfb cni.projectcalico.org/podIP: 100.114.18.70/32 cni.projectcalico.org/podIPs: 100.114.18.70/32 Status: Running IP: 100.114.18.70 IPs: IP: 100.114.18.70 Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: containerd://8d1401fa52a92aee89fa7f211b6b5f189824759c39962731ac9dae45ac03503b Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 Port: <none> Host Port: <none> State: Running Started: Fri, 02 Dec 2022 09:20:56 +0000 Ready: True Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bwbb6 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-bwbb6: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m54s default-scheduler Successfully assigned statefulset-2522/ss2-0 to ip-172-20-37-90.ap-southeast-1.compute.internal Normal Pulled 4m53s kubelet Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine Normal Created 4m53s kubelet Created container webserver Normal Started 4m53s kubelet Started container webserver Dec 2 09:25:49.218: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/2beff379-721f-11ed-88e2-f6fea4ddc280/kubectl --server=https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-2522 logs ss2-0 --tail=100' Dec 2 09:25:50.272: INFO: stderr: "" Dec 2 09:25:50.272: INFO: stdout: "172.20.37.90 - - [02/Dec/2022:09:24:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - 
[02/Dec/2022:09:24:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:29 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:30 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:31 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:32 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:33 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:34 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:35 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:36 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:37 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:38 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:39 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:40 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:41 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:44 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:55 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:24:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:00 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:01 +0000] \"GET /index.html HTTP/1.1\" 200 
45\n172.20.37.90 - - [02/Dec/2022:09:25:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:29 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:30 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:31 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:32 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:33 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:34 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:35 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:36 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:37 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:38 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:39 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:40 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:41 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:44 +0000] \"GET /index.html 
HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.37.90 - - [02/Dec/2022:09:25:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" Dec 2 09:25:50.272: INFO: Last 100 log lines of ss2-0: 172.20.37.90 - - [02/Dec/2022:09:24:10 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:11 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:12 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:23 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:24 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:25 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:26 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:27 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:28 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:29 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:30 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:31 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:32 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:33 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:34 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:35 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:36 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:37 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:38 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:39 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:40 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:41 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:42 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:43 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:44 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:45 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:46 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:47 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - 
[02/Dec/2022:09:24:48 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:50 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:51 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:52 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:53 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:54 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:55 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:56 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:57 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:58 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:24:59 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:00 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:01 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:02 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:03 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:04 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:05 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:06 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:07 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:08 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:09 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:10 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:11 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:12 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:23 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:24 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:25 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:26 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:27 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:28 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:29 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:30 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:31 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:32 +0000] "GET 
/index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:33 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:34 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:35 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:36 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:37 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:38 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:39 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:40 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:41 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:42 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:43 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:44 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:45 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:46 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:47 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:48 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.37.90 - - [02/Dec/2022:09:25:49 +0000] "GET /index.html HTTP/1.1" 200 45 Dec 2 09:25:50.272: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/2beff379-721f-11ed-88e2-f6fea4ddc280/kubectl --server=https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-2522 describe po ss2-1' Dec 2 09:25:51.543: INFO: stderr: "" Dec 2 09:25:51.543: INFO: stdout: "Name: ss2-1\nNamespace: statefulset-2522\nPriority: 0\nNode: ip-172-20-60-164.ap-southeast-1.compute.internal/172.20.60.164\nStart Time: Fri, 02 Dec 2022 09:19:19 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-57bbdd95cb\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-1\nAnnotations: cni.projectcalico.org/containerID: ee309f9df841542cec5cb7fec8ce7a65913fce8946d69e6d92d1e35ca357db4c\n cni.projectcalico.org/podIP: 100.106.61.139/32\n cni.projectcalico.org/podIPs: 100.106.61.139/32\nStatus: Running\nIP: 100.106.61.139\nIPs:\n IP: 100.106.61.139\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: containerd://808021e52fab5c2c85045df5fa29b89e1ce4da2935cdd29728794ed61e11944e\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\n Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Fri, 02 Dec 2022 09:19:21 +0000\n Ready: True\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2pln2 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-2pln2:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal 
Scheduled 6m32s default-scheduler Successfully assigned statefulset-2522/ss2-1 to ip-172-20-60-164.ap-southeast-1.compute.internal\n Normal Pulled 6m31s kubelet Container image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\" already present on machine\n Normal Created 6m30s kubelet Created container webserver\n Normal Started 6m30s kubelet Started container webserver\n" Dec 2 09:25:51.543: INFO: Output of kubectl describe ss2-1: Name: ss2-1 Namespace: statefulset-2522 Priority: 0 Node: ip-172-20-60-164.ap-southeast-1.compute.internal/172.20.60.164 Start Time: Fri, 02 Dec 2022 09:19:19 +0000 Labels: baz=blah controller-revision-hash=ss2-57bbdd95cb foo=bar statefulset.kubernetes.io/pod-name=ss2-1 Annotations: cni.projectcalico.org/containerID: ee309f9df841542cec5cb7fec8ce7a65913fce8946d69e6d92d1e35ca357db4c cni.projectcalico.org/podIP: 100.106.61.139/32 cni.projectcalico.org/podIPs: 100.106.61.139/32 Status: Running IP: 100.106.61.139 IPs: IP: 100.106.61.139 Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: containerd://808021e52fab5c2c85045df5fa29b89e1ce4da2935cdd29728794ed61e11944e Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 Port: <none> Host Port: <none> State: Running Started: Fri, 02 Dec 2022 09:19:21 +0000 Ready: True Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2pln2 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-2pln2: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 6m32s default-scheduler Successfully assigned statefulset-2522/ss2-1 to ip-172-20-60-164.ap-southeast-1.compute.internal Normal Pulled 6m31s kubelet Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine Normal Created 6m30s kubelet Created container webserver Normal Started 6m30s kubelet Started container webserver Dec 2 09:25:51.543: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/2beff379-721f-11ed-88e2-f6fea4ddc280/kubectl --server=https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-2522 logs ss2-1 --tail=100' Dec 2 09:25:52.475: INFO: stderr: "" Dec 2 09:25:52.475: INFO: stdout: "172.20.60.164 - - [02/Dec/2022:09:24:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - 
- [02/Dec/2022:09:24:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:29 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:30 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:31 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:32 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:33 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:34 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:35 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:36 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:37 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:38 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:39 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:40 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:41 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:44 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:55 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:24:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:00 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:03 
+0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:29 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:30 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:31 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:32 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:33 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:34 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:35 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:36 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:37 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:38 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:39 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:40 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:41 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:44 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:45 +0000] \"GET /index.html 
HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.60.164 - - [02/Dec/2022:09:25:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" Dec 2 09:25:52.476: INFO: Last 100 log lines of ss2-1: 172.20.60.164 - - [02/Dec/2022:09:24:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:23 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:24 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:25 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:26 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:27 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:28 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:29 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:30 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:31 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:32 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:33 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:34 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:35 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:36 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:37 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:38 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:39 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:40 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:41 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:42 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:43 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:44 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:45 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:46 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:47 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:48 +0000] "GET 
/index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:50 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:51 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:52 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:53 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:54 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:55 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:56 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:57 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:58 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:24:59 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:00 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:01 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:02 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:03 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:04 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:05 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:06 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:07 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:08 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:09 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:10 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:11 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:12 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:23 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:24 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:25 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:26 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:27 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:28 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:29 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:30 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:31 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:32 
+0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:33 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:34 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:35 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:36 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:37 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:38 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:39 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:40 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:41 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:42 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:43 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:44 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:45 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:46 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:47 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:48 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:50 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:51 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.60.164 - - [02/Dec/2022:09:25:52 +0000] "GET /index.html HTTP/1.1" 200 45 Dec 2 09:25:52.476: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/2beff379-721f-11ed-88e2-f6fea4ddc280/kubectl --server=https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-2522 describe po ss2-2' Dec 2 09:25:53.636: INFO: stderr: "" Dec 2 09:25:53.636: INFO: stdout: "Name: ss2-2\nNamespace: statefulset-2522\nPriority: 0\nNode: ip-172-20-34-182.ap-southeast-1.compute.internal/172.20.34.182\nStart Time: Fri, 02 Dec 2022 09:20:57 +0000\nLabels: baz=blah\n controller-revision-hash=ss2-5f8764d585\n foo=bar\n statefulset.kubernetes.io/pod-name=ss2-2\nAnnotations: cni.projectcalico.org/containerID: af1fafd3a663cc17e3d074e61fef2a9945df577f6c58bbd068b57547a924c991\n cni.projectcalico.org/podIP: 100.116.72.111/32\n cni.projectcalico.org/podIPs: 100.116.72.111/32\nStatus: Running\nIP: 100.116.72.111\nIPs:\n IP: 100.116.72.111\nControlled By: StatefulSet/ss2\nContainers:\n webserver:\n Container ID: containerd://baa50d0c11758c1e14ac9a7018dd8d8ee2874a706d79f29f0b5c49bd12b6b129\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.39-2\n Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6\n Port: <none>\n Host Port: <none>\n State: Running\n Started: Fri, 02 Dec 2022 09:20:58 +0000\n Ready: True\n Restart Count: 0\n Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ksm5h (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-ksm5h:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n 
DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4m56s default-scheduler Successfully assigned statefulset-2522/ss2-2 to ip-172-20-34-182.ap-southeast-1.compute.internal\n Normal Pulled 4m55s kubelet Container image \"k8s.gcr.io/e2e-test-images/httpd:2.4.39-2\" already present on machine\n Normal Created 4m55s kubelet Created container webserver\n Normal Started 4m55s kubelet Started container webserver\n" Dec 2 09:25:53.636: INFO: Output of kubectl describe ss2-2: Name: ss2-2 Namespace: statefulset-2522 Priority: 0 Node: ip-172-20-34-182.ap-southeast-1.compute.internal/172.20.34.182 Start Time: Fri, 02 Dec 2022 09:20:57 +0000 Labels: baz=blah controller-revision-hash=ss2-5f8764d585 foo=bar statefulset.kubernetes.io/pod-name=ss2-2 Annotations: cni.projectcalico.org/containerID: af1fafd3a663cc17e3d074e61fef2a9945df577f6c58bbd068b57547a924c991 cni.projectcalico.org/podIP: 100.116.72.111/32 cni.projectcalico.org/podIPs: 100.116.72.111/32 Status: Running IP: 100.116.72.111 IPs: IP: 100.116.72.111 Controlled By: StatefulSet/ss2 Containers: webserver: Container ID: containerd://baa50d0c11758c1e14ac9a7018dd8d8ee2874a706d79f29f0b5c49bd12b6b129 Image: k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 Image ID: k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 Port: <none> Host Port: <none> State: Running Started: Fri, 02 Dec 2022 09:20:58 +0000 Ready: True Restart Count: 0 Readiness: http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ksm5h (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-ksm5h: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m56s default-scheduler Successfully assigned statefulset-2522/ss2-2 to ip-172-20-34-182.ap-southeast-1.compute.internal Normal Pulled 4m55s kubelet Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.39-2" already present on machine Normal Created 4m55s kubelet Created container webserver Normal Started 4m55s kubelet Started container webserver Dec 2 09:25:53.636: INFO: Running '/home/prow/go/src/k8s.io/kops/_rundir/2beff379-721f-11ed-88e2-f6fea4ddc280/kubectl --server=https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io --kubeconfig=/root/.kube/config --namespace=statefulset-2522 logs ss2-2 --tail=100' Dec 2 09:25:54.658: INFO: stderr: "" Dec 2 09:25:54.658: INFO: stdout: "172.20.34.182 - - [02/Dec/2022:09:24:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:18 +0000] \"GET /index.html 
HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:29 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:30 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:31 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:32 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:33 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:34 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:35 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:36 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:37 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:38 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:39 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:40 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:41 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:44 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:54 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:55 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:56 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:57 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:58 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:24:59 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:00 +0000] \"GET /index.html HTTP/1.1\" 200 
45\n172.20.34.182 - - [02/Dec/2022:09:25:01 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:02 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:03 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:04 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:05 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:06 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:07 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:08 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:09 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:10 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:11 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:12 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:13 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:14 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:15 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:16 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:17 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:18 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:19 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:20 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:21 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:22 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:23 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:24 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:25 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:26 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:27 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:28 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:29 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:30 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:31 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:32 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:33 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:34 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:35 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:36 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:37 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:38 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:39 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:40 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:41 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:42 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - 
[02/Dec/2022:09:25:43 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:44 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:45 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:46 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:47 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:48 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:49 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:50 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:51 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:52 +0000] \"GET /index.html HTTP/1.1\" 200 45\n172.20.34.182 - - [02/Dec/2022:09:25:53 +0000] \"GET /index.html HTTP/1.1\" 200 45\n" Dec 2 09:25:54.658: INFO: Last 100 log lines of ss2-2: 172.20.34.182 - - [02/Dec/2022:09:24:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:23 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:24 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:25 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:26 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:27 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:28 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:29 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:30 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:31 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:32 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:33 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:34 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:35 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:36 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:37 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:38 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:39 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:40 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:41 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:42 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:43 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:44 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:45 +0000] "GET /index.html HTTP/1.1" 200 45 
172.20.34.182 - - [02/Dec/2022:09:24:46 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:47 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:48 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:50 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:51 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:52 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:53 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:54 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:55 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:56 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:57 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:58 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:24:59 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:00 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:01 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:02 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:03 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:04 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:05 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:06 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:07 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:08 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:09 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:10 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:11 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:12 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:13 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:14 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:15 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:16 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:17 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:18 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:19 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:20 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:21 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:22 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:23 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:24 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:25 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:26 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:27 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:28 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:29 +0000] "GET /index.html 
HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:30 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:31 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:32 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:33 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:34 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:35 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:36 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:37 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:38 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:39 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:40 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:41 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:42 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:43 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:44 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:45 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:46 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:47 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:48 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:49 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:50 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:51 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:52 +0000] "GET /index.html HTTP/1.1" 200 45 172.20.34.182 - - [02/Dec/2022:09:25:53 +0000] "GET /index.html HTTP/1.1" 200 45 Dec 2 09:25:54.658: INFO: Deleting all statefulset in ns statefulset-2522 Dec 2 09:25:54.874: INFO: Scaling statefulset ss2 to 0 Dec 2 09:26:35.729: INFO: Waiting for statefulset status.replicas updated to 0 Dec 2 09:26:35.941: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "statefulset-2522". �[1mSTEP�[0m: Found 42 events. 
Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:03 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:03 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-2522/ss2-0 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:04 +0000 UTC - event for ss2-0: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:04 +0000 UTC - event for ss2-0: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:04 +0000 UTC - event for ss2-0: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:19 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-1 in StatefulSet ss2 successful Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:19 +0000 UTC - event for ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-2522/ss2-1 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:20 +0000 UTC - event for ss2-1: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:21 +0000 UTC - event for ss2-1: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:21 +0000 UTC - event for ss2-1: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:22 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:22 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-2522/ss2-2 to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:23 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:23 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: Created container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:19:23 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:42 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-2 in StatefulSet ss2 successful Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:42 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Killing: Stopping container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:43 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-2522/ss2-2 to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:43 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Unhealthy: 
Readiness probe failed: Get "http://100.116.72.105:80/index.html": dial tcp 100.116.72.105:80: connect: connection refused Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:44 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.39-2" already present on machine Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:44 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:44 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: Created container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:55 +0000 UTC - event for ss2-0: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Unhealthy: Readiness probe failed: Get "http://100.106.24.115:80/index.html": dial tcp 100.106.24.115:80: connect: invalid argument Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:55 +0000 UTC - event for ss2-0: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Killing: Stopping container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:55 +0000 UTC - event for ss2-0: {default-scheduler } Scheduled: Successfully assigned statefulset-2522/ss2-0 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:55 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Killing: Stopping container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:55 +0000 UTC - event for test: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint statefulset-2522/test: Operation cannot be fulfilled on endpoints "test": the object has been modified; please apply your changes to the latest version and try again Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:56 +0000 UTC - event for ss2-0: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:56 +0000 UTC - event for ss2-0: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2" already present on machine Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:56 +0000 UTC - event for ss2-0: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:57 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-2522/ss2-2 to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:58 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.39-2" already present on machine Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:58 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: Created container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:20:58 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:25:55 +0000 UTC - event for ss2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Unhealthy: Readiness probe failed: Get "http://100.116.72.111:80/index.html": dial tcp 100.116.72.111:80: connect: connection refused Dec 2 09:26:36.784: INFO: At 2022-12-02 09:25:55 +0000 UTC - event for ss2-2: 
{kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Killing: Stopping container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:25:59 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-1 in StatefulSet ss2 successful Dec 2 09:26:36.784: INFO: At 2022-12-02 09:25:59 +0000 UTC - event for ss2-1: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Killing: Stopping container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:26:00 +0000 UTC - event for ss2-1: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Unhealthy: Readiness probe failed: Get "http://100.106.61.139:80/index.html": dial tcp 100.106.61.139:80: connect: connection refused Dec 2 09:26:36.784: INFO: At 2022-12-02 09:26:24 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-0 in StatefulSet ss2 successful Dec 2 09:26:36.784: INFO: At 2022-12-02 09:26:24 +0000 UTC - event for ss2-0: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Killing: Stopping container webserver Dec 2 09:26:36.784: INFO: At 2022-12-02 09:26:24 +0000 UTC - event for ss2-0: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Unhealthy: Readiness probe failed: Get "http://100.114.18.70:80/index.html": dial tcp 100.114.18.70:80: connect: connection refused Dec 2 09:26:36.995: INFO: POD NODE PHASE GRACE CONDITIONS Dec 2 09:26:36.995: INFO: Dec 2 09:26:37.206: INFO: Logging node info for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:26:37.417: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-34-182.ap-southeast-1.compute.internal fd7593c8-1a7c-4e6d-9018-4c36698568dc 44576 0 2022-12-02 09:02:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-34-182.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-34-182.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-070fdf3c5d5f93304"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.34.182/19 projectcalico.org/IPv4IPIPTunnelAddr:100.116.72.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {Go-http-client Update v1 2022-12-02 09:03:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:25:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status} {kube-controller-manager Update v1 2022-12-02 09:26:02 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-070fdf3c5d5f93304,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:22 +0000 UTC,LastTransitionTime:2022-12-02 09:03:22 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:25:51 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:25:51 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:25:51 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:25:51 +0000 UTC,LastTransitionTime:2022-12-02 09:03:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.34.182,},NodeAddress{Type:ExternalIP,Address:54.169.57.14,},NodeAddress{Type:Hostname,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-57-14.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec264a17458d690f294e12b6a6b2138c,SystemUUID:ec264a17-458d-690f-294e-12b6a6b2138c,BootID:37b6e011-229a-4491-b86f-f149d97d10c0,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd 
k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b,DevicePath:,},},Config:nil,},} Dec 2 09:26:37.418: INFO: Logging kubelet events for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:26:37.631: INFO: Logging pods the kubelet thinks is on node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:26:38.057: INFO: externalip-test-jzxmp started at <nil> (0+0 container statuses recorded) Dec 2 09:26:38.057: INFO: fail-once-local-rl6bk started at 2022-12-02 09:25:58 +0000 UTC (0+1 
container statuses recorded) Dec 2 09:26:38.058: INFO: Container c ready: false, restart count 1 Dec 2 09:26:38.058: INFO: ss-0 started at 2022-12-02 09:25:47 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:38.058: INFO: Container webserver ready: true, restart count 0 Dec 2 09:26:38.058: INFO: pod-subpath-test-configmap-btf6 started at 2022-12-02 09:26:20 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:38.058: INFO: Container test-container-subpath-configmap-btf6 ready: true, restart count 0 Dec 2 09:26:38.058: INFO: netserver-0 started at 2022-12-02 09:26:13 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:38.058: INFO: Container webserver ready: false, restart count 0 Dec 2 09:26:38.058: INFO: pod-c06f878c-28c7-41a1-87c2-738c5d086903 started at 2022-12-02 09:26:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:38.058: INFO: Container write-pod ready: true, restart count 0 Dec 2 09:26:38.058: INFO: calico-node-xhqfx started at 2022-12-02 09:02:23 +0000 UTC (4+1 container statuses recorded) Dec 2 09:26:38.058: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:26:38.058: INFO: Init container install-cni ready: true, restart count 0 Dec 2 09:26:38.058: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:26:38.058: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:26:38.058: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:26:38.058: INFO: security-context-30809941-26de-4cd4-9bdb-fbb207653907 started at <nil> (0+0 container statuses recorded) Dec 2 09:26:38.058: INFO: fail-once-local-fbzg2 started at 2022-12-02 09:26:16 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:38.058: INFO: Container c ready: false, restart count 1 Dec 2 09:26:38.058: INFO: ebs-csi-node-4b4zl started at 2022-12-02 09:02:23 +0000 UTC (0+3 container statuses recorded) Dec 2 09:26:38.058: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:26:38.058: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:26:38.058: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:26:38.058: INFO: ss2-0 started at 2022-12-02 09:26:27 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:38.058: INFO: Container webserver ready: true, restart count 0 Dec 2 09:26:38.058: INFO: kube-proxy-ip-172-20-34-182.ap-southeast-1.compute.internal started at 2022-12-02 09:02:02 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:38.058: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:26:38.058: INFO: fail-once-local-c98lz started at 2022-12-02 09:26:25 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:38.058: INFO: Container c ready: false, restart count 1 Dec 2 09:26:38.058: INFO: coredns-5556cb978d-bx2m5 started at 2022-12-02 09:03:10 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:38.058: INFO: Container coredns ready: true, restart count 0 Dec 2 09:26:38.058: INFO: hostexec-ip-172-20-34-182.ap-southeast-1.compute.internal-nw5kx started at 2022-12-02 09:25:58 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:38.058: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:26:39.815: INFO: Latency metrics for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:26:39.815: INFO: Logging node info for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:26:40.026: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-90.ap-southeast-1.compute.internal f779b12d-0e95-4e7f-929e-368941a29b99 45086 0 2022-12-02 
09:02:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-90.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-37-90.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-001dd83f455b4a895"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.37.90/19 projectcalico.org/IPv4IPIPTunnelAddr:100.114.18.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:20:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:20:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-001dd83f455b4a895,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: 
{{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:05 +0000 UTC,LastTransitionTime:2022-12-02 09:03:05 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:26:05 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:26:05 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:26:05 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:26:05 +0000 UTC,LastTransitionTime:2022-12-02 09:02:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.90,},NodeAddress{Type:ExternalIP,Address:13.212.195.103,},NodeAddress{Type:Hostname,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-195-103.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec216e9b184e3e44fb8ed6af9b651047,SystemUUID:ec216e9b-184e-3e44-fb8e-d6af9b651047,BootID:0bbb1eb8-60c7-4bb1-b8c7-bb110f238f78,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b 
k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0908a80c21068b13b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0908a80c21068b13b,DevicePath:,},},Config:nil,},} Dec 2 09:26:40.027: INFO: Logging kubelet events for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:26:40.241: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:26:40.458: INFO: ebs-csi-node-vswvn started at 2022-12-02 09:02:04 +0000 UTC (0+3 container statuses recorded) Dec 2 09:26:40.458: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:26:40.458: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:26:40.458: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:26:40.458: INFO: ss2-1 started at 2022-12-02 09:26:35 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container webserver ready: false, restart count 0 Dec 2 09:26:40.458: INFO: ss-1 started at 2022-12-02 09:20:47 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container webserver ready: false, restart count 0 Dec 2 09:26:40.458: INFO: pod-495720a9-8bc3-4a1a-9b06-801aee782586 started at 2022-12-02 09:26:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container write-pod ready: true, restart count 0 Dec 2 09:26:40.458: INFO: hostexec-ip-172-20-37-90.ap-southeast-1.compute.internal-79gx9 started at 2022-12-02 09:26:03 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:26:40.458: INFO: netserver-1 started at 2022-12-02 09:26:13 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container webserver ready: true, restart count 0 Dec 2 09:26:40.458: INFO: alpine-nnp-true-86683c83-8e26-4b51-bc50-ae6e9aa7048f started at <nil> (0+0 container statuses recorded) Dec 2 09:26:40.458: INFO: coredns-autoscaler-85fcbbb64-kb6k7 started at 2022-12-02 09:02:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container autoscaler ready: true, restart count 0 Dec 2 09:26:40.458: INFO: pod-c327965f-cb67-40f6-bb65-1250450d0965 started at 2022-12-02 09:26:22 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container write-pod ready: true, restart count 0 Dec 2 09:26:40.458: INFO: externalip-test-4wm8n started at <nil> (0+0 container 
statuses recorded) Dec 2 09:26:40.458: INFO: kube-proxy-ip-172-20-37-90.ap-southeast-1.compute.internal started at 2022-12-02 09:01:54 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:26:40.458: INFO: calico-node-cqg7n started at 2022-12-02 09:02:04 +0000 UTC (4+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:26:40.458: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:26:40.458: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:26:40.458: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:26:40.458: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:26:40.458: INFO: hostexec-ip-172-20-37-90.ap-southeast-1.compute.internal-c8jdd started at 2022-12-02 09:26:00 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:26:40.458: INFO: coredns-5556cb978d-pztr5 started at 2022-12-02 09:02:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container coredns ready: true, restart count 0 Dec 2 09:26:40.458: INFO: hostexec-ip-172-20-37-90.ap-southeast-1.compute.internal-96dwz started at 2022-12-02 09:25:50 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:26:40.458: INFO: success started at 2022-12-02 09:26:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:40.458: INFO: Container success ready: false, restart count 0 Dec 2 09:26:41.409: INFO: Latency metrics for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:26:41.409: INFO: Logging node info for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:26:41.621: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-49-67.ap-southeast-1.compute.internal 81600d2c-3d2a-4421-913e-e1c53c1ad1df 46616 0 2022-12-02 09:02:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a io.kubernetes.storage.mock/node:some-mock-node kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-49-67.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-49-67.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-1659":"ip-172-20-49-67.ap-southeast-1.compute.internal","ebs.csi.aws.com":"i-056f60b74d454bea7"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.49.67/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.24.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:03:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:26:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-056f60b74d454bea7,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:19 +0000 UTC,LastTransitionTime:2022-12-02 09:03:19 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:26:32 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:26:32 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:26:32 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:26:32 +0000 UTC,LastTransitionTime:2022-12-02 
09:03:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.49.67,},NodeAddress{Type:ExternalIP,Address:13.228.79.89,},NodeAddress{Type:Hostname,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-228-79-89.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2bd833fc2a274ccf3bf225f245ddce,SystemUUID:ec2bd833-fc2a-274c-cf3b-f225f245ddce,BootID:1ab59414-4d0c-4bc8-bb64-5f41a1b02c74,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:dd6d57960dc104a4ee0fa7c58c6faa3e38725561af374c17f8cb905f7f73ba66 k8s.gcr.io/build-image/debian-iptables:bullseye-v1.1.0],SizeBytes:27059231,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:26:41.622: INFO: Logging kubelet events for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:26:41.839: INFO: Logging pods the kubelet thinks is on node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:26:42.057: INFO: private started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:42.058: 
INFO: Container cntr ready: true, restart count 0 Dec 2 09:26:42.058: INFO: slave started at 2022-12-02 09:19:22 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:42.058: INFO: Container cntr ready: true, restart count 0 Dec 2 09:26:42.058: INFO: calico-node-n6lj9 started at 2022-12-02 09:02:20 +0000 UTC (4+1 container statuses recorded) Dec 2 09:26:42.058: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:26:42.058: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:26:42.058: INFO: master started at 2022-12-02 09:19:13 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:42.058: INFO: Container cntr ready: true, restart count 0 Dec 2 09:26:42.058: INFO: csi-hostpathplugin-0 started at 2022-12-02 09:26:00 +0000 UTC (0+7 container statuses recorded) Dec 2 09:26:42.058: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Container csi-snapshotter ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Container hostpath ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:26:42.058: INFO: pod-subpath-test-inlinevolume-jgst started at 2022-12-02 09:26:36 +0000 UTC (1+1 container statuses recorded) Dec 2 09:26:42.058: INFO: Init container init-volume-inlinevolume-jgst ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Container test-container-subpath-inlinevolume-jgst ready: true, restart count 0 Dec 2 09:26:42.058: INFO: kube-proxy-ip-172-20-49-67.ap-southeast-1.compute.internal started at 2022-12-02 09:01:59 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:42.058: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:26:42.058: INFO: hostexec-ip-172-20-49-67.ap-southeast-1.compute.internal-vjf4n started at 2022-12-02 09:26:17 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:42.058: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:26:42.058: INFO: ebs-csi-node-w9kzj started at 2022-12-02 09:02:20 +0000 UTC (0+3 container statuses recorded) Dec 2 09:26:42.058: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:26:42.058: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:26:42.058: INFO: default started at 2022-12-02 09:20:44 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:42.058: INFO: Container cntr ready: true, restart count 0 Dec 2 09:26:42.058: INFO: netserver-2 started at 2022-12-02 09:26:13 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:42.058: INFO: Container webserver ready: true, restart count 0 Dec 2 09:26:42.821: INFO: Latency metrics for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:26:42.821: INFO: Logging node info for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:26:43.035: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-55-194.ap-southeast-1.compute.internal 890854e9-f510-402d-9886-49c1d41318f4 42325 0 2022-12-02 09:00:57 +0000 UTC <nil> 
<nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:master-ap-southeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-55-194.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-00b46fae03d775a19"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.55.194/19 projectcalico.org/IPv4IPIPTunnelAddr:100.104.201.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-02 09:00:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2022-12-02 09:01:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2022-12-02 09:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:01:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {Go-http-client Update v1 2022-12-02 09:02:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:02:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-00b46fae03d775a19,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3894931456 0} {<nil>} 3803644Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790073856 0} {<nil>} 3701244Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:02:00 +0000 UTC,LastTransitionTime:2022-12-02 09:02:00 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:22:52 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:22:52 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:22:52 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:22:52 +0000 UTC,LastTransitionTime:2022-12-02 09:01:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.194,},NodeAddress{Type:ExternalIP,Address:54.169.84.77,},NodeAddress{Type:Hostname,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-84-77.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2521391aeba8d2805b54ac578aa7d0,SystemUUID:ec252139-1aeb-a8d2-805b-54ac578aa7d0,BootID:4e785fe8-5068-4fd6-b8b0-5a4aae03c815,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:256a64fb44876d270f04ada1afd3ca431341f249aa52cbe2b3780f8f23961142 registry.k8s.io/etcdadm/etcd-manager:v3.0.20220727],SizeBytes:216364516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.14],SizeBytes:136567243,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.14],SizeBytes:126380852,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 
docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.14],SizeBytes:54860595,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:58cc91c551e9e941a752e205eefed1c8da56f97a51e054b3d341b67bb7bf27eb docker.io/calico/kube-controllers:v3.23.5],SizeBytes:53774679,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.24.5],SizeBytes:41269276,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.24.5],SizeBytes:40816784,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.24.5],SizeBytes:5130223,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:26:43.035: INFO: Logging kubelet events for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:26:43.251: INFO: Logging pods the kubelet thinks is on node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:26:43.476: INFO: kube-scheduler-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:43.476: INFO: Container kube-scheduler ready: true, restart count 0 Dec 2 09:26:43.476: INFO: calico-node-xfrb9 started at 2022-12-02 09:01:32 +0000 UTC (4+1 container statuses recorded) Dec 2 09:26:43.476: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:26:43.476: INFO: Init container install-cni ready: true, restart count 0 Dec 2 09:26:43.476: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:26:43.476: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:26:43.476: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:26:43.476: INFO: kops-controller-7l85j started at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:43.476: INFO: Container kops-controller ready: true, restart count 0 Dec 2 09:26:43.476: INFO: 
etcd-manager-events-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:43.476: INFO: Container etcd-manager ready: true, restart count 0 Dec 2 09:26:43.476: INFO: etcd-manager-main-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:43.476: INFO: Container etcd-manager ready: true, restart count 0 Dec 2 09:26:43.476: INFO: kube-apiserver-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+2 container statuses recorded) Dec 2 09:26:43.476: INFO: Container healthcheck ready: true, restart count 0 Dec 2 09:26:43.476: INFO: Container kube-apiserver ready: true, restart count 1 Dec 2 09:26:43.476: INFO: kube-controller-manager-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:43.476: INFO: Container kube-controller-manager ready: true, restart count 2 Dec 2 09:26:43.476: INFO: kube-proxy-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:43.476: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:26:43.476: INFO: ebs-csi-controller-55c8659c7c-sqq7m started at 2022-12-02 09:01:32 +0000 UTC (0+5 container statuses recorded) Dec 2 09:26:43.476: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:26:43.476: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:26:43.476: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:26:43.476: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:26:43.476: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:26:43.476: INFO: ebs-csi-node-rfwfq started at 2022-12-02 09:01:32 +0000 UTC (0+3 container statuses recorded) Dec 2 09:26:43.476: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:26:43.476: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:26:43.476: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:26:43.476: INFO: dns-controller-847484c97f-z8rs4 started at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:43.476: INFO: Container dns-controller ready: true, restart count 0 Dec 2 09:26:43.476: INFO: calico-kube-controllers-795c657547-9mz5t started at 2022-12-02 09:01:48 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:43.476: INFO: Container calico-kube-controllers ready: true, restart count 0 Dec 2 09:26:44.176: INFO: Latency metrics for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:26:44.176: INFO: Logging node info for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:26:44.387: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-60-164.ap-southeast-1.compute.internal 4d06e01c-27c4-4c2f-b118-647413c7ddf6 46422 0 2022-12-02 09:02:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-60-164.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a 
topology.hostpath.csi/node:ip-172-20-60-164.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-7891":"ip-172-20-60-164.ap-southeast-1.compute.internal","ebs.csi.aws.com":"i-0a7cd257efff997b0"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.60.164/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.61.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:26:13 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:26:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-0a7cd257efff997b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:11 +0000 
UTC,LastTransitionTime:2022-12-02 09:03:11 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:26:26 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:26:26 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:26:26 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:26:26 +0000 UTC,LastTransitionTime:2022-12-02 09:02:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.60.164,},NodeAddress{Type:ExternalIP,Address:13.212.105.239,},NodeAddress{Type:Hostname,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-105-239.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ab9d0d1126900acfd3b82032bd9b,SystemUUID:ec28ab9d-0d11-2690-0acf-d3b82032bd9b,BootID:925eb9d6-3c66-49ad-be43-0411968ca10c,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 
k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 
k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-ephemeral-7891^6299b92f-7223-11ed-aa20-1e8e5420d7a5 kubernetes.io/csi/ebs.csi.aws.com^vol-03538f76b5eded3b1],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-ephemeral-7891^6299b92f-7223-11ed-aa20-1e8e5420d7a5,DevicePath:,},AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-03538f76b5eded3b1,DevicePath:,},},Config:nil,},} Dec 2 09:26:44.387: INFO: Logging kubelet events for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:26:44.600: INFO: Logging pods the kubelet thinks is on node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:26:44.818: INFO: netserver-3 started at 2022-12-02 09:26:42 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:44.818: INFO: Container webserver ready: false, restart count 0 Dec 2 09:26:44.818: INFO: ss2-2 started at 2022-12-02 09:26:40 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:44.818: INFO: Container webserver ready: true, restart count 0 Dec 2 09:26:44.818: INFO: pod-c5debc40-fd93-4f6b-89b4-717a0ecee491 started at 2022-12-02 09:26:10 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:44.818: INFO: Container write-pod ready: true, restart count 0 Dec 2 09:26:44.818: INFO: netserver-3 started at 2022-12-02 09:26:14 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:44.818: INFO: Container webserver ready: true, restart count 0 Dec 2 09:26:44.818: INFO: kube-proxy-ip-172-20-60-164.ap-southeast-1.compute.internal started at 2022-12-02 09:01:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:44.818: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:26:44.818: INFO: calico-node-gv4lf started at 2022-12-02 09:02:06 +0000 UTC (4+1 container statuses recorded) Dec 2 09:26:44.818: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:26:44.818: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:26:44.818: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:26:44.818: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:26:44.818: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:26:44.818: INFO: inline-volume-tester-5zgpn started at 2022-12-02 09:26:22 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:44.818: INFO: Container csi-volume-tester ready: true, restart count 0 Dec 2 09:26:44.818: INFO: csi-hostpathplugin-0 started at 2022-12-02 09:26:05 
+0000 UTC (0+7 container statuses recorded) Dec 2 09:26:44.818: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:26:44.819: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:26:44.819: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:26:44.819: INFO: Container csi-snapshotter ready: true, restart count 0 Dec 2 09:26:44.819: INFO: Container hostpath ready: true, restart count 0 Dec 2 09:26:44.819: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:26:44.819: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:26:44.819: INFO: busybox-a14e9395-54f8-42f6-a8b6-ddd212d98abb started at 2022-12-02 09:26:03 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:44.819: INFO: Container busybox ready: true, restart count 0 Dec 2 09:26:44.819: INFO: labelsupdate36e15b4c-df40-4fc8-ac2a-fdd6063c0f1f started at 2022-12-02 09:25:57 +0000 UTC (0+1 container statuses recorded) Dec 2 09:26:44.819: INFO: Container client-container ready: true, restart count 0 Dec 2 09:26:44.819: INFO: ebs-csi-node-lrwc5 started at 2022-12-02 09:02:06 +0000 UTC (0+3 container statuses recorded) Dec 2 09:26:44.819: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:26:44.819: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:26:44.819: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:26:45.586: INFO: Latency metrics for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:26:45.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2522" for this suite.
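The node dumps above report health as a list of NodeCondition entries (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready), and the framework's final wait ("Waiting up to 3m0s for all (but 0) nodes to be ready") keys off the Ready condition. Below is a minimal Go sketch of deriving readiness from those conditions; it uses the public k8s.io/api types but is an illustration, not the e2e framework's own helper.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isNodeReady reports whether the node's Ready condition is True — the same
// signal the dumps above show as Type:Ready,Status:True,Reason:KubeletReady.
func isNodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Conditions mirroring the ip-172-20-60-164 dump above (abridged).
	node := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
		{Type: corev1.NodeMemoryPressure, Status: corev1.ConditionFalse, Reason: "KubeletHasSufficientMemory"},
		{Type: corev1.NodeDiskPressure, Status: corev1.ConditionFalse, Reason: "KubeletHasNoDiskPressure"},
		{Type: corev1.NodeReady, Status: corev1.ConditionTrue, Reason: "KubeletReady"},
	}}}
	fmt.Println("ready:", isNodeReady(node)) // ready: true
}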
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sService\sendpoints\slatency\sshould\snot\sbe\svery\shigh\s\s\[Conformance\]$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Dec 2 09:20:43.205: Tail (99 percentile) latency should be less than 50s 50, 90, 99 percentiles: 490.030456ms 793.11841ms 1m11.447673702s /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 from junit_02.xml
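The three values in the failure line are the 50th, 90th, and 99th percentiles over the 200 endpoint-propagation samples collected in the log below; the test fails because the 99th-percentile (tail) latency exceeds the 50s bound. A minimal Go sketch of that kind of check follows — the helper name and sample values are illustrative, not the test's actual code.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the p-th percentile from an ascending-sorted sample set
// by indexing at p% of the slice length.
func percentile(sorted []time.Duration, p int) time.Duration {
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Abridged stand-ins for the 200 "Got endpoints" latencies logged below.
	samples := []time.Duration{
		225 * time.Millisecond, 490 * time.Millisecond, 793 * time.Millisecond,
		71*time.Second + 448*time.Millisecond, // a tail sample past the bound
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })

	limit := 50 * time.Second
	p50, p90, p99 := percentile(samples, 50), percentile(samples, 90), percentile(samples, 99)
	fmt.Printf("50, 90, 99 percentiles: %v %v %v\n", p50, p90, p99)
	if p99 > limit {
		fmt.Printf("FAIL: Tail (99 percentile) latency should be less than %v\n", limit)
	}
}

Applied to the full sorted sample list printed further down, an index-based pick of this sort lines up with the reported 490.030456ms / 793.11841ms / 1m11.447673702s.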
[BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Dec 2 09:19:13.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Dec 2 09:19:15.447: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-1624 I1202 09:19:15.665309 6573 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1624, replica count: 1 I1202 09:19:16.916064 6573 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1202 09:19:17.916384 6573 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1202 09:19:18.917003 6573 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1202 09:19:19.917368 6573 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1202 09:19:20.917659 6573 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1202 09:19:21.918013 6573 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1202 09:19:22.918344 6573 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1202 09:19:23.918862 6573 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1202 09:19:24.921297 6573 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Dec 2 09:19:25.455: INFO: Created: latency-svc-wl6nm Dec 2 09:19:25.462: INFO: Got endpoints: latency-svc-wl6nm [228.810145ms] Dec 2 09:19:25.681: INFO: Created: latency-svc-7hgb9 Dec 2 09:19:25.691: INFO: Created: latency-svc-hdqcb Dec 2 09:19:25.705: INFO: Created: latency-svc-tqrqk Dec 2 09:19:25.713: INFO: Got endpoints: latency-svc-hdqcb [249.960985ms] Dec 2 09:19:25.713: INFO: Got endpoints: latency-svc-7hgb9 [250.523949ms] Dec 2 09:19:25.744: INFO: Got endpoints: latency-svc-tqrqk [281.426416ms] Dec 2 09:19:25.745: INFO: Created: latency-svc-bzgpc Dec 2 09:19:25.764: INFO: Created: latency-svc-fzrz2 Dec 2 09:19:25.773: INFO: Got endpoints: latency-svc-fzrz2 [310.841814ms] Dec 2 09:19:25.774: INFO: Got endpoints: latency-svc-bzgpc [311.038707ms] Dec 2 09:19:25.892: INFO: Created: latency-svc-qk6k9 Dec 2 09:19:25.901: INFO: Got endpoints: latency-svc-qk6k9 [438.381693ms] Dec 2 09:19:25.903: INFO: Created: latency-svc-9vxg2 Dec 2 09:19:25.910: INFO: Got endpoints: latency-svc-9vxg2 [447.330176ms] Dec
2 09:19:25.911: INFO: Created: latency-svc-xpd9c Dec 2 09:19:25.919: INFO: Created: latency-svc-ls8fr Dec 2 09:19:25.923: INFO: Got endpoints: latency-svc-xpd9c [460.708756ms] Dec 2 09:19:25.926: INFO: Got endpoints: latency-svc-ls8fr [463.159988ms] Dec 2 09:19:25.927: INFO: Created: latency-svc-xw777 Dec 2 09:19:25.934: INFO: Created: latency-svc-8x2gm Dec 2 09:19:25.936: INFO: Got endpoints: latency-svc-xw777 [473.755946ms] Dec 2 09:19:25.947: INFO: Got endpoints: latency-svc-8x2gm [234.444242ms] Dec 2 09:19:25.951: INFO: Created: latency-svc-v4j4r Dec 2 09:19:25.957: INFO: Got endpoints: latency-svc-v4j4r [244.520241ms] Dec 2 09:19:25.963: INFO: Created: latency-svc-65554 Dec 2 09:19:25.967: INFO: Got endpoints: latency-svc-65554 [504.015309ms] Dec 2 09:19:25.971: INFO: Created: latency-svc-8ms8x Dec 2 09:19:25.980: INFO: Got endpoints: latency-svc-8ms8x [517.187081ms] Dec 2 09:19:25.981: INFO: Created: latency-svc-f4bmx Dec 2 09:19:25.983: INFO: Got endpoints: latency-svc-f4bmx [520.719467ms] Dec 2 09:19:25.988: INFO: Created: latency-svc-8vfqz Dec 2 09:19:25.995: INFO: Got endpoints: latency-svc-8vfqz [532.866288ms] Dec 2 09:19:26.001: INFO: Created: latency-svc-7mx46 Dec 2 09:19:26.008: INFO: Got endpoints: latency-svc-7mx46 [545.4269ms] Dec 2 09:19:26.011: INFO: Created: latency-svc-kzzzp Dec 2 09:19:26.016: INFO: Created: latency-svc-9wxh8 Dec 2 09:19:26.016: INFO: Got endpoints: latency-svc-kzzzp [271.720022ms] Dec 2 09:19:26.022: INFO: Got endpoints: latency-svc-9wxh8 [248.678382ms] Dec 2 09:19:26.023: INFO: Created: latency-svc-r56wn Dec 2 09:19:26.031: INFO: Got endpoints: latency-svc-r56wn [257.488444ms] Dec 2 09:19:26.126: INFO: Created: latency-svc-qjps7 Dec 2 09:19:26.128: INFO: Got endpoints: latency-svc-qjps7 [226.355439ms] Dec 2 09:19:26.136: INFO: Created: latency-svc-q26g9 Dec 2 09:19:26.146: INFO: Got endpoints: latency-svc-q26g9 [235.677081ms] Dec 2 09:19:26.153: INFO: Created: latency-svc-4nd9r Dec 2 09:19:26.167: INFO: Got endpoints: latency-svc-4nd9r [243.035096ms] Dec 2 09:19:26.169: INFO: Created: latency-svc-pm97q Dec 2 09:19:26.182: INFO: Got endpoints: latency-svc-pm97q [255.602658ms] Dec 2 09:19:26.185: INFO: Created: latency-svc-sksjj Dec 2 09:19:26.192: INFO: Got endpoints: latency-svc-sksjj [255.231712ms] Dec 2 09:19:26.227: INFO: Created: latency-svc-ndp2t Dec 2 09:19:26.234: INFO: Got endpoints: latency-svc-ndp2t [276.182371ms] Dec 2 09:19:26.244: INFO: Created: latency-svc-wl2vs Dec 2 09:19:26.245: INFO: Created: latency-svc-gxrsj Dec 2 09:19:26.264: INFO: Created: latency-svc-8b7mx Dec 2 09:19:26.264: INFO: Created: latency-svc-kdgn9 Dec 2 09:19:26.265: INFO: Got endpoints: latency-svc-gxrsj [317.709429ms] Dec 2 09:19:26.265: INFO: Got endpoints: latency-svc-wl2vs [298.530947ms] Dec 2 09:19:26.275: INFO: Got endpoints: latency-svc-kdgn9 [294.86977ms] Dec 2 09:19:26.276: INFO: Got endpoints: latency-svc-8b7mx [292.729407ms] Dec 2 09:19:26.295: INFO: Created: latency-svc-pjnv7 Dec 2 09:19:26.295: INFO: Created: latency-svc-942h9 Dec 2 09:19:26.295: INFO: Created: latency-svc-t9rgf Dec 2 09:19:26.295: INFO: Created: latency-svc-vpt8t Dec 2 09:19:26.295: INFO: Created: latency-svc-8vsnw Dec 2 09:19:26.295: INFO: Got endpoints: latency-svc-8vsnw [287.218787ms] Dec 2 09:19:26.300: INFO: Got endpoints: latency-svc-942h9 [305.007327ms] Dec 2 09:19:26.301: INFO: Got endpoints: latency-svc-vpt8t [269.244283ms] Dec 2 09:19:26.303: INFO: Got endpoints: latency-svc-t9rgf [286.860881ms] Dec 2 09:19:26.303: INFO: Got endpoints: latency-svc-pjnv7 [280.788475ms] Dec 2 
09:19:26.384: INFO: Created: latency-svc-g95v9 Dec 2 09:19:26.416: INFO: Created: latency-svc-4vfmx Dec 2 09:19:26.417: INFO: Got endpoints: latency-svc-g95v9 [288.95413ms] Dec 2 09:19:26.443: INFO: Created: latency-svc-6hf4d Dec 2 09:19:26.460: INFO: Got endpoints: latency-svc-4vfmx [314.133058ms] Dec 2 09:19:26.460: INFO: Got endpoints: latency-svc-6hf4d [293.424573ms] Dec 2 09:19:26.494: INFO: Created: latency-svc-pg8dn Dec 2 09:19:26.540: INFO: Got endpoints: latency-svc-pg8dn [358.456138ms] Dec 2 09:19:26.551: INFO: Created: latency-svc-h5tw5 Dec 2 09:19:26.572: INFO: Created: latency-svc-lzhhx Dec 2 09:19:26.608: INFO: Got endpoints: latency-svc-lzhhx [415.672933ms] Dec 2 09:19:26.621: INFO: Created: latency-svc-sqk5l Dec 2 09:19:26.630: INFO: Got endpoints: latency-svc-h5tw5 [395.714082ms] Dec 2 09:19:26.644: INFO: Got endpoints: latency-svc-sqk5l [378.93954ms] Dec 2 09:19:26.654: INFO: Created: latency-svc-xldxb Dec 2 09:19:26.688: INFO: Got endpoints: latency-svc-xldxb [422.327841ms] Dec 2 09:19:26.695: INFO: Created: latency-svc-n8tnx Dec 2 09:19:26.702: INFO: Got endpoints: latency-svc-n8tnx [426.824105ms] Dec 2 09:19:26.710: INFO: Created: latency-svc-jm88l Dec 2 09:19:26.714: INFO: Got endpoints: latency-svc-jm88l [437.409604ms] Dec 2 09:19:26.717: INFO: Created: latency-svc-w6749 Dec 2 09:19:26.725: INFO: Got endpoints: latency-svc-w6749 [429.824929ms] Dec 2 09:19:26.727: INFO: Created: latency-svc-dm86t Dec 2 09:19:26.734: INFO: Created: latency-svc-8k84c Dec 2 09:19:26.735: INFO: Got endpoints: latency-svc-dm86t [431.769161ms] Dec 2 09:19:26.742: INFO: Got endpoints: latency-svc-8k84c [441.198622ms] Dec 2 09:19:26.746: INFO: Created: latency-svc-9jqbh Dec 2 09:19:26.751: INFO: Got endpoints: latency-svc-9jqbh [450.829724ms] Dec 2 09:19:26.758: INFO: Created: latency-svc-6hj5c Dec 2 09:19:26.767: INFO: Created: latency-svc-cvvzj Dec 2 09:19:26.772: INFO: Created: latency-svc-z6w2k Dec 2 09:19:26.779: INFO: Created: latency-svc-tw85z Dec 2 09:19:26.785: INFO: Created: latency-svc-gxgpv Dec 2 09:19:26.801: INFO: Got endpoints: latency-svc-6hj5c [498.020607ms] Dec 2 09:19:26.825: INFO: Created: latency-svc-c7pb2 Dec 2 09:19:26.854: INFO: Got endpoints: latency-svc-cvvzj [437.692873ms] Dec 2 09:19:26.859: INFO: Created: latency-svc-dkcgv Dec 2 09:19:26.873: INFO: Created: latency-svc-6nd6n Dec 2 09:19:26.897: INFO: Got endpoints: latency-svc-z6w2k [436.534849ms] Dec 2 09:19:26.911: INFO: Created: latency-svc-x4fcn Dec 2 09:19:26.921: INFO: Created: latency-svc-2lxd4 Dec 2 09:19:26.930: INFO: Created: latency-svc-lz29c Dec 2 09:19:26.939: INFO: Created: latency-svc-nmzqs Dec 2 09:19:26.948: INFO: Got endpoints: latency-svc-tw85z [487.404757ms] Dec 2 09:19:26.960: INFO: Created: latency-svc-tfgkn Dec 2 09:19:26.966: INFO: Created: latency-svc-828db Dec 2 09:19:26.973: INFO: Created: latency-svc-4srvp Dec 2 09:19:26.996: INFO: Got endpoints: latency-svc-gxgpv [456.262351ms] Dec 2 09:19:27.022: INFO: Created: latency-svc-fnq49 Dec 2 09:19:27.047: INFO: Got endpoints: latency-svc-c7pb2 [439.25578ms] Dec 2 09:19:27.072: INFO: Created: latency-svc-qxc4w Dec 2 09:19:27.112: INFO: Created: latency-svc-f8prj Dec 2 09:19:27.135: INFO: Got endpoints: latency-svc-dkcgv [505.623938ms] Dec 2 09:19:27.147: INFO: Got endpoints: latency-svc-6nd6n [502.483671ms] Dec 2 09:19:27.168: INFO: Created: latency-svc-9ddjf Dec 2 09:19:27.197: INFO: Got endpoints: latency-svc-x4fcn [509.193351ms] Dec 2 09:19:27.213: INFO: Created: latency-svc-g7ch8 Dec 2 09:19:27.247: INFO: Got endpoints: latency-svc-2lxd4 
[544.938518ms] Dec 2 09:19:27.269: INFO: Created: latency-svc-csml7 Dec 2 09:19:27.303: INFO: Got endpoints: latency-svc-lz29c [589.286478ms] Dec 2 09:19:27.348: INFO: Got endpoints: latency-svc-nmzqs [622.532983ms] Dec 2 09:19:27.399: INFO: Got endpoints: latency-svc-tfgkn [663.977356ms] Dec 2 09:19:27.455: INFO: Got endpoints: latency-svc-828db [713.08912ms] Dec 2 09:19:27.464: INFO: Created: latency-svc-fj724 Dec 2 09:19:27.478: INFO: Created: latency-svc-dchlr Dec 2 09:19:27.487: INFO: Created: latency-svc-bvc7r Dec 2 09:19:27.490: INFO: Created: latency-svc-92vjk Dec 2 09:19:27.497: INFO: Got endpoints: latency-svc-4srvp [745.047279ms] Dec 2 09:19:27.519: INFO: Created: latency-svc-8jxdn Dec 2 09:19:27.561: INFO: Got endpoints: latency-svc-fnq49 [760.283636ms] Dec 2 09:19:27.570: INFO: Created: latency-svc-27ghw Dec 2 09:19:27.599: INFO: Got endpoints: latency-svc-qxc4w [744.547178ms] Dec 2 09:19:27.622: INFO: Created: latency-svc-699jp Dec 2 09:19:27.652: INFO: Got endpoints: latency-svc-f8prj [755.637503ms] Dec 2 09:19:27.673: INFO: Created: latency-svc-ktnw9 Dec 2 09:19:27.696: INFO: Got endpoints: latency-svc-9ddjf [748.346629ms] Dec 2 09:19:27.721: INFO: Created: latency-svc-wzgr2 Dec 2 09:19:27.750: INFO: Got endpoints: latency-svc-g7ch8 [753.216882ms] Dec 2 09:19:27.779: INFO: Created: latency-svc-gjgps Dec 2 09:19:27.800: INFO: Got endpoints: latency-svc-csml7 [752.563981ms] Dec 2 09:19:27.816: INFO: Created: latency-svc-wjrkb Dec 2 09:19:27.850: INFO: Got endpoints: latency-svc-fj724 [652.732329ms] Dec 2 09:19:27.872: INFO: Created: latency-svc-bxnbq Dec 2 09:19:27.901: INFO: Got endpoints: latency-svc-dchlr [746.999522ms] Dec 2 09:19:27.918: INFO: Created: latency-svc-gphcq Dec 2 09:19:27.952: INFO: Got endpoints: latency-svc-bvc7r [813.776833ms] Dec 2 09:19:27.978: INFO: Created: latency-svc-wklj4 Dec 2 09:19:28.000: INFO: Got endpoints: latency-svc-92vjk [753.37569ms] Dec 2 09:19:28.022: INFO: Created: latency-svc-sjr4d Dec 2 09:19:28.108: INFO: Got endpoints: latency-svc-8jxdn [805.1925ms] Dec 2 09:19:28.131: INFO: Got endpoints: latency-svc-27ghw [783.315639ms] Dec 2 09:19:28.149: INFO: Created: latency-svc-7zqn7 Dec 2 09:19:28.192: INFO: Got endpoints: latency-svc-699jp [793.11841ms] Dec 2 09:19:28.198: INFO: Created: latency-svc-rltpf Dec 2 09:19:28.236: INFO: Got endpoints: latency-svc-ktnw9 [780.529104ms] Dec 2 09:19:28.279: INFO: Got endpoints: latency-svc-wzgr2 [781.682273ms] Dec 2 09:19:28.290: INFO: Created: latency-svc-ldr8b Dec 2 09:19:28.314: INFO: Got endpoints: latency-svc-gjgps [752.893027ms] Dec 2 09:19:28.327: INFO: Created: latency-svc-g5ddv Dec 2 09:19:28.370: INFO: Got endpoints: latency-svc-wjrkb [770.579174ms] Dec 2 09:19:28.407: INFO: Got endpoints: latency-svc-bxnbq [754.817908ms] Dec 2 09:19:28.414: INFO: Created: latency-svc-x6wmf Dec 2 09:19:28.435: INFO: Created: latency-svc-t27ft Dec 2 09:19:28.460: INFO: Created: latency-svc-5mlcc Dec 2 09:19:28.481: INFO: Got endpoints: latency-svc-gphcq [784.4585ms] Dec 2 09:19:28.497: INFO: Created: latency-svc-gmppv Dec 2 09:19:28.519: INFO: Got endpoints: latency-svc-wklj4 [769.499216ms] Dec 2 09:19:28.550: INFO: Created: latency-svc-qhsp7 Dec 2 09:19:28.551: INFO: Got endpoints: latency-svc-sjr4d [751.094388ms] Dec 2 09:19:28.572: INFO: Created: latency-svc-kzm4j Dec 2 09:19:28.605: INFO: Got endpoints: latency-svc-7zqn7 [754.94468ms] Dec 2 09:19:28.610: INFO: Created: latency-svc-6x9p4 Dec 2 09:19:28.660: INFO: Created: latency-svc-vnjn2 Dec 2 09:19:28.671: INFO: Got endpoints: latency-svc-rltpf 
[769.98197ms] Dec 2 09:19:28.706: INFO: Got endpoints: latency-svc-ldr8b [754.01721ms] Dec 2 09:19:28.720: INFO: Created: latency-svc-pcpd5 Dec 2 09:19:28.745: INFO: Created: latency-svc-4rr77 Dec 2 09:19:28.755: INFO: Got endpoints: latency-svc-g5ddv [754.427864ms] Dec 2 09:19:28.767: INFO: Created: latency-svc-r9rz9 Dec 2 09:19:28.798: INFO: Got endpoints: latency-svc-x6wmf [689.875508ms] Dec 2 09:19:28.825: INFO: Created: latency-svc-lx874 Dec 2 09:19:28.852: INFO: Got endpoints: latency-svc-t27ft [720.437107ms] Dec 2 09:19:28.889: INFO: Created: latency-svc-c88ng Dec 2 09:19:28.912: INFO: Got endpoints: latency-svc-5mlcc [719.642338ms] Dec 2 09:19:28.931: INFO: Created: latency-svc-w5xk7 Dec 2 09:19:28.949: INFO: Got endpoints: latency-svc-gmppv [712.950082ms] Dec 2 09:19:28.972: INFO: Created: latency-svc-54gf7 Dec 2 09:19:29.003: INFO: Got endpoints: latency-svc-qhsp7 [724.546653ms] Dec 2 09:19:29.026: INFO: Created: latency-svc-44kbj Dec 2 09:19:29.056: INFO: Got endpoints: latency-svc-kzm4j [742.144225ms] Dec 2 09:19:29.074: INFO: Created: latency-svc-tg2rm Dec 2 09:19:29.111: INFO: Got endpoints: latency-svc-6x9p4 [740.811991ms] Dec 2 09:19:29.136: INFO: Created: latency-svc-h27xv Dec 2 09:19:29.153: INFO: Got endpoints: latency-svc-vnjn2 [745.449144ms] Dec 2 09:19:29.168: INFO: Created: latency-svc-pbn9j Dec 2 09:19:29.197: INFO: Got endpoints: latency-svc-pcpd5 [716.521832ms] Dec 2 09:19:29.218: INFO: Created: latency-svc-6r5l2 Dec 2 09:19:29.247: INFO: Got endpoints: latency-svc-4rr77 [727.6719ms] Dec 2 09:19:29.271: INFO: Created: latency-svc-rkcjq Dec 2 09:19:29.298: INFO: Got endpoints: latency-svc-r9rz9 [747.560865ms] Dec 2 09:19:29.328: INFO: Created: latency-svc-85rl2 Dec 2 09:19:29.348: INFO: Got endpoints: latency-svc-lx874 [742.908209ms] Dec 2 09:19:29.374: INFO: Created: latency-svc-qz5nl Dec 2 09:19:29.396: INFO: Got endpoints: latency-svc-c88ng [725.366885ms] Dec 2 09:19:29.421: INFO: Created: latency-svc-k9sgp Dec 2 09:19:29.491: INFO: Got endpoints: latency-svc-w5xk7 [785.234473ms] Dec 2 09:19:29.514: INFO: Got endpoints: latency-svc-54gf7 [759.40503ms] Dec 2 09:19:29.539: INFO: Created: latency-svc-tp64j Dec 2 09:19:29.549: INFO: Got endpoints: latency-svc-44kbj [750.851412ms] Dec 2 09:19:29.554: INFO: Created: latency-svc-jt6b5 Dec 2 09:19:29.573: INFO: Created: latency-svc-6bgqv Dec 2 09:19:29.620: INFO: Got endpoints: latency-svc-tg2rm [768.66129ms] Dec 2 09:19:29.626: INFO: Created: latency-svc-xwbzk Dec 2 09:19:29.668: INFO: Got endpoints: latency-svc-h27xv [755.454054ms] Dec 2 09:19:29.744: INFO: Got endpoints: latency-svc-pbn9j [794.877186ms] Dec 2 09:19:29.780: INFO: Got endpoints: latency-svc-6r5l2 [776.482425ms] Dec 2 09:19:29.780: INFO: Created: latency-svc-7f86n Dec 2 09:19:29.801: INFO: Created: latency-svc-qddpx Dec 2 09:19:29.814: INFO: Got endpoints: latency-svc-rkcjq [757.545507ms] Dec 2 09:19:29.821: INFO: Created: latency-svc-qksst Dec 2 09:19:29.849: INFO: Created: latency-svc-p4f9m Dec 2 09:19:29.857: INFO: Got endpoints: latency-svc-85rl2 [746.08677ms] Dec 2 09:19:29.889: INFO: Created: latency-svc-2xwff Dec 2 09:19:29.899: INFO: Got endpoints: latency-svc-qz5nl [746.237119ms] Dec 2 09:19:29.950: INFO: Got endpoints: latency-svc-k9sgp [753.068485ms] Dec 2 09:19:29.984: INFO: Created: latency-svc-jvr22 Dec 2 09:19:29.997: INFO: Created: latency-svc-xpbdc Dec 2 09:19:30.009: INFO: Got endpoints: latency-svc-tp64j [761.282813ms] Dec 2 09:19:30.030: INFO: Created: latency-svc-k6pgt Dec 2 09:19:30.051: INFO: Got endpoints: latency-svc-jt6b5 
[752.642952ms] Dec 2 09:19:30.114: INFO: Created: latency-svc-ct2k9 Dec 2 09:19:30.904: INFO: Got endpoints: latency-svc-6bgqv [1.530407198s] Dec 2 09:20:40.836: INFO: Created: latency-svc-5xl8n Dec 2 09:20:40.871: INFO: Created: latency-svc-xd62r Dec 2 09:20:40.871: INFO: Created: latency-svc-q8zsp Dec 2 09:20:40.871: INFO: Created: latency-svc-s8r9d Dec 2 09:20:40.887: INFO: Got endpoints: latency-svc-xwbzk [1m11.490358219s] Dec 2 09:20:40.939: INFO: Got endpoints: latency-svc-7f86n [1m11.447673702s] Dec 2 09:20:40.949: INFO: Got endpoints: latency-svc-5xl8n [1m11.049653886s] Dec 2 09:20:40.950: INFO: Got endpoints: latency-svc-qddpx [1m11.435141226s] Dec 2 09:20:40.950: INFO: Got endpoints: latency-svc-s8r9d [1m10.898633826s] Dec 2 09:20:40.950: INFO: Got endpoints: latency-svc-qksst [1m11.400939419s] Dec 2 09:20:40.950: INFO: Got endpoints: latency-svc-p4f9m [1m11.329962775s] Dec 2 09:20:40.951: INFO: Got endpoints: latency-svc-2xwff [1m11.28300423s] Dec 2 09:20:40.951: INFO: Got endpoints: latency-svc-xd62r [1m10.940366138s] Dec 2 09:20:40.951: INFO: Got endpoints: latency-svc-jvr22 [1m11.206903263s] Dec 2 09:20:40.949: INFO: Got endpoints: latency-svc-q8zsp [1m10.998529441s] Dec 2 09:20:40.951: INFO: Got endpoints: latency-svc-ct2k9 [1m11.094111872s] Dec 2 09:20:40.951: INFO: Got endpoints: latency-svc-k6pgt [1m11.136815954s] Dec 2 09:20:40.951: INFO: Got endpoints: latency-svc-xpbdc [1m11.17133573s] Dec 2 09:20:41.174: INFO: Created: latency-svc-qg87x Dec 2 09:20:41.174: INFO: Created: latency-svc-wft69 Dec 2 09:20:41.175: INFO: Got endpoints: latency-svc-wft69 [287.414411ms] Dec 2 09:20:41.175: INFO: Got endpoints: latency-svc-qg87x [1m9.176194086s] Dec 2 09:20:41.181: INFO: Created: latency-svc-wf52b Dec 2 09:20:41.183: INFO: Got endpoints: latency-svc-wf52b [244.293672ms] Dec 2 09:20:41.359: INFO: Created: latency-svc-sljwc Dec 2 09:20:41.360: INFO: Created: latency-svc-m26z6 Dec 2 09:20:41.363: INFO: Got endpoints: latency-svc-sljwc [414.1308ms] Dec 2 09:20:41.365: INFO: Got endpoints: latency-svc-m26z6 [413.359584ms] Dec 2 09:20:41.372: INFO: Created: latency-svc-bn87d Dec 2 09:20:41.378: INFO: Got endpoints: latency-svc-bn87d [428.364626ms] Dec 2 09:20:41.386: INFO: Created: latency-svc-f8k9b Dec 2 09:20:41.422: INFO: Got endpoints: latency-svc-f8k9b [471.789373ms] Dec 2 09:20:41.425: INFO: Created: latency-svc-bqfgv Dec 2 09:20:41.427: INFO: Got endpoints: latency-svc-bqfgv [476.302668ms] Dec 2 09:20:41.435: INFO: Created: latency-svc-75p2m Dec 2 09:20:41.442: INFO: Got endpoints: latency-svc-75p2m [491.531853ms] Dec 2 09:20:41.449: INFO: Created: latency-svc-6cptt Dec 2 09:20:41.461: INFO: Created: latency-svc-6tdnn Dec 2 09:20:41.467: INFO: Got endpoints: latency-svc-6cptt [516.670998ms] Dec 2 09:20:41.474: INFO: Created: latency-svc-t95w5 Dec 2 09:20:41.474: INFO: Got endpoints: latency-svc-6tdnn [523.301861ms] Dec 2 09:20:41.482: INFO: Created: latency-svc-vfrzc Dec 2 09:20:41.482: INFO: Got endpoints: latency-svc-t95w5 [530.853907ms] Dec 2 09:20:41.487: INFO: Got endpoints: latency-svc-vfrzc [536.378232ms] Dec 2 09:20:41.492: INFO: Created: latency-svc-g9lh4 Dec 2 09:20:41.501: INFO: Created: latency-svc-jjpqb Dec 2 09:20:41.506: INFO: Got endpoints: latency-svc-g9lh4 [554.59574ms] Dec 2 09:20:41.510: INFO: Got endpoints: latency-svc-jjpqb [559.057315ms] Dec 2 09:20:41.515: INFO: Created: latency-svc-jgc92 Dec 2 09:20:41.531: INFO: Got endpoints: latency-svc-jgc92 [347.713495ms] Dec 2 09:20:41.539: INFO: Created: latency-svc-x6n7f Dec 2 09:20:41.543: INFO: Got endpoints: 
latency-svc-x6n7f [368.19601ms] Dec 2 09:20:41.551: INFO: Created: latency-svc-fkjn4 Dec 2 09:20:41.557: INFO: Got endpoints: latency-svc-fkjn4 [381.124978ms] Dec 2 09:20:41.607: INFO: Created: latency-svc-9rtch Dec 2 09:20:41.612: INFO: Created: latency-svc-98twg Dec 2 09:20:41.612: INFO: Got endpoints: latency-svc-9rtch [234.202588ms] Dec 2 09:20:41.654: INFO: Got endpoints: latency-svc-98twg [289.996506ms] Dec 2 09:20:41.712: INFO: Created: latency-svc-q68n4 Dec 2 09:20:41.725: INFO: Got endpoints: latency-svc-q68n4 [360.058871ms] Dec 2 09:20:41.745: INFO: Created: latency-svc-bsbvr Dec 2 09:20:41.797: INFO: Got endpoints: latency-svc-bsbvr [375.21551ms] Dec 2 09:20:41.910: INFO: Created: latency-svc-b7vwr Dec 2 09:20:41.910: INFO: Created: latency-svc-9zncg Dec 2 09:20:41.910: INFO: Created: latency-svc-b5htd Dec 2 09:20:41.910: INFO: Created: latency-svc-74wdd Dec 2 09:20:41.910: INFO: Created: latency-svc-qr2xv Dec 2 09:20:41.911: INFO: Created: latency-svc-pgmcd Dec 2 09:20:41.912: INFO: Created: latency-svc-bp752 Dec 2 09:20:41.912: INFO: Created: latency-svc-knsjv Dec 2 09:20:41.923: INFO: Created: latency-svc-947z2 Dec 2 09:20:41.924: INFO: Created: latency-svc-7h7n4 Dec 2 09:20:41.926: INFO: Created: latency-svc-xkcxs Dec 2 09:20:41.929: INFO: Created: latency-svc-qqbdz Dec 2 09:20:41.930: INFO: Created: latency-svc-fx6jq Dec 2 09:20:41.943: INFO: Got endpoints: latency-svc-bp752 [475.823522ms] Dec 2 09:20:41.943: INFO: Got endpoints: latency-svc-b5htd [411.624874ms] Dec 2 09:20:41.949: INFO: Got endpoints: latency-svc-b7vwr [522.428068ms] Dec 2 09:20:41.950: INFO: Got endpoints: latency-svc-qr2xv [475.461844ms] Dec 2 09:20:41.957: INFO: Got endpoints: latency-svc-knsjv [302.338896ms] Dec 2 09:20:41.959: INFO: Got endpoints: latency-svc-947z2 [453.124664ms] Dec 2 09:20:41.977: INFO: Got endpoints: latency-svc-xkcxs [495.620901ms] Dec 2 09:20:41.977: INFO: Got endpoints: latency-svc-7h7n4 [433.832607ms] Dec 2 09:20:41.977: INFO: Got endpoints: latency-svc-qqbdz [489.945171ms] Dec 2 09:20:41.978: INFO: Got endpoints: latency-svc-9zncg [468.12207ms] Dec 2 09:20:41.979: INFO: Got endpoints: latency-svc-fx6jq [421.929799ms] Dec 2 09:20:41.981: INFO: Created: latency-svc-7mb7j Dec 2 09:20:41.981: INFO: Got endpoints: latency-svc-pgmcd [538.759449ms] Dec 2 09:20:41.983: INFO: Got endpoints: latency-svc-74wdd [370.171249ms] Dec 2 09:20:41.988: INFO: Got endpoints: latency-svc-7mb7j [262.941827ms] Dec 2 09:20:42.017: INFO: Created: latency-svc-nngn6 Dec 2 09:20:42.022: INFO: Got endpoints: latency-svc-nngn6 [224.881377ms] Dec 2 09:20:42.167: INFO: Created: latency-svc-mvz9h Dec 2 09:20:42.190: INFO: Got endpoints: latency-svc-mvz9h [247.004326ms] Dec 2 09:20:42.207: INFO: Created: latency-svc-zx8jr Dec 2 09:20:42.224: INFO: Got endpoints: latency-svc-zx8jr [281.207048ms] Dec 2 09:20:42.240: INFO: Created: latency-svc-plgvz Dec 2 09:20:42.255: INFO: Created: latency-svc-vwrhr Dec 2 09:20:42.262: INFO: Got endpoints: latency-svc-plgvz [312.481274ms] Dec 2 09:20:42.278: INFO: Got endpoints: latency-svc-vwrhr [328.642088ms] Dec 2 09:20:42.278: INFO: Created: latency-svc-nzppn Dec 2 09:20:42.293: INFO: Got endpoints: latency-svc-nzppn [336.149817ms] Dec 2 09:20:42.299: INFO: Created: latency-svc-vslwz Dec 2 09:20:42.312: INFO: Got endpoints: latency-svc-vslwz [352.899771ms] Dec 2 09:20:42.318: INFO: Created: latency-svc-kxgm9 Dec 2 09:20:42.322: INFO: Created: latency-svc-7cl6p Dec 2 09:20:42.324: INFO: Got endpoints: latency-svc-kxgm9 [346.199117ms] Dec 2 09:20:42.331: INFO: Got endpoints: 
latency-svc-7cl6p [352.219141ms] Dec 2 09:20:42.332: INFO: Created: latency-svc-pkk8d Dec 2 09:20:42.341: INFO: Got endpoints: latency-svc-pkk8d [363.257734ms] Dec 2 09:20:42.347: INFO: Created: latency-svc-vrfv5 Dec 2 09:20:42.350: INFO: Got endpoints: latency-svc-vrfv5 [372.709827ms] Dec 2 09:20:42.355: INFO: Created: latency-svc-84xjj Dec 2 09:20:42.362: INFO: Created: latency-svc-74bf7 Dec 2 09:20:42.362: INFO: Got endpoints: latency-svc-84xjj [383.743201ms] Dec 2 09:20:42.370: INFO: Got endpoints: latency-svc-74bf7 [388.686585ms] Dec 2 09:20:42.376: INFO: Created: latency-svc-kqzc6 Dec 2 09:20:42.391: INFO: Created: latency-svc-blnqc Dec 2 09:20:42.398: INFO: Got endpoints: latency-svc-kqzc6 [410.117003ms] Dec 2 09:20:42.407: INFO: Got endpoints: latency-svc-blnqc [424.09928ms] Dec 2 09:20:42.410: INFO: Created: latency-svc-gq47t Dec 2 09:20:42.418: INFO: Got endpoints: latency-svc-gq47t [396.109347ms] Dec 2 09:20:42.420: INFO: Created: latency-svc-8bgv8 Dec 2 09:20:42.423: INFO: Got endpoints: latency-svc-8bgv8 [233.374334ms] Dec 2 09:20:42.442: INFO: Created: latency-svc-dqj9s Dec 2 09:20:42.452: INFO: Got endpoints: latency-svc-dqj9s [227.550053ms] Dec 2 09:20:42.484: INFO: Created: latency-svc-q9pmh Dec 2 09:20:42.490: INFO: Got endpoints: latency-svc-q9pmh [227.580973ms] Dec 2 09:20:42.499: INFO: Created: latency-svc-cqppp Dec 2 09:20:42.504: INFO: Got endpoints: latency-svc-cqppp [226.084938ms] Dec 2 09:20:42.514: INFO: Created: latency-svc-mnfjv Dec 2 09:20:42.528: INFO: Created: latency-svc-6bdj6 Dec 2 09:20:42.543: INFO: Created: latency-svc-xhzgd Dec 2 09:20:42.553: INFO: Got endpoints: latency-svc-mnfjv [259.852885ms] Dec 2 09:20:42.556: INFO: Created: latency-svc-rz7sc Dec 2 09:20:42.565: INFO: Created: latency-svc-8qsvt Dec 2 09:20:42.576: INFO: Created: latency-svc-h52zl Dec 2 09:20:42.590: INFO: Created: latency-svc-t4l4b Dec 2 09:20:42.599: INFO: Got endpoints: latency-svc-6bdj6 [287.31882ms] Dec 2 09:20:42.600: INFO: Created: latency-svc-gx9t2 Dec 2 09:20:42.619: INFO: Created: latency-svc-vwwcj Dec 2 09:20:42.625: INFO: Created: latency-svc-n95h6 Dec 2 09:20:42.643: INFO: Created: latency-svc-d8tgq Dec 2 09:20:42.648: INFO: Created: latency-svc-hl54z Dec 2 09:20:42.651: INFO: Got endpoints: latency-svc-xhzgd [326.972782ms] Dec 2 09:20:42.666: INFO: Created: latency-svc-m8td5 Dec 2 09:20:42.697: INFO: Got endpoints: latency-svc-rz7sc [366.42261ms] Dec 2 09:20:42.708: INFO: Created: latency-svc-97r95 Dec 2 09:20:42.748: INFO: Got endpoints: latency-svc-8qsvt [407.149048ms] Dec 2 09:20:42.800: INFO: Got endpoints: latency-svc-h52zl [449.527359ms] Dec 2 09:20:42.852: INFO: Got endpoints: latency-svc-t4l4b [490.030456ms] Dec 2 09:20:42.899: INFO: Got endpoints: latency-svc-gx9t2 [529.023565ms] Dec 2 09:20:42.952: INFO: Got endpoints: latency-svc-vwwcj [553.641421ms] Dec 2 09:20:42.998: INFO: Got endpoints: latency-svc-n95h6 [590.823593ms] Dec 2 09:20:43.061: INFO: Got endpoints: latency-svc-d8tgq [642.615904ms] Dec 2 09:20:43.101: INFO: Got endpoints: latency-svc-hl54z [677.552974ms] Dec 2 09:20:43.149: INFO: Got endpoints: latency-svc-m8td5 [697.269138ms] Dec 2 09:20:43.203: INFO: Got endpoints: latency-svc-97r95 [713.772995ms] Dec 2 09:20:43.204: INFO: Latencies: [224.881377ms 226.084938ms 226.355439ms 227.550053ms 227.580973ms 233.374334ms 234.202588ms 234.444242ms 235.677081ms 243.035096ms 244.293672ms 244.520241ms 247.004326ms 248.678382ms 249.960985ms 250.523949ms 255.231712ms 255.602658ms 257.488444ms 259.852885ms 262.941827ms 269.244283ms 271.720022ms 276.182371ms 
280.788475ms 281.207048ms 281.426416ms 286.860881ms 287.218787ms 287.31882ms 287.414411ms 288.95413ms 289.996506ms 292.729407ms 293.424573ms 294.86977ms 298.530947ms 302.338896ms 305.007327ms 310.841814ms 311.038707ms 312.481274ms 314.133058ms 317.709429ms 326.972782ms 328.642088ms 336.149817ms 346.199117ms 347.713495ms 352.219141ms 352.899771ms 358.456138ms 360.058871ms 363.257734ms 366.42261ms 368.19601ms 370.171249ms 372.709827ms 375.21551ms 378.93954ms 381.124978ms 383.743201ms 388.686585ms 395.714082ms 396.109347ms 407.149048ms 410.117003ms 411.624874ms 413.359584ms 414.1308ms 415.672933ms 421.929799ms 422.327841ms 424.09928ms 426.824105ms 428.364626ms 429.824929ms 431.769161ms 433.832607ms 436.534849ms 437.409604ms 437.692873ms 438.381693ms 439.25578ms 441.198622ms 447.330176ms 449.527359ms 450.829724ms 453.124664ms 456.262351ms 460.708756ms 463.159988ms 468.12207ms 471.789373ms 473.755946ms 475.461844ms 475.823522ms 476.302668ms 487.404757ms 489.945171ms 490.030456ms 491.531853ms 495.620901ms 498.020607ms 502.483671ms 504.015309ms 505.623938ms 509.193351ms 516.670998ms 517.187081ms 520.719467ms 522.428068ms 523.301861ms 529.023565ms 530.853907ms 532.866288ms 536.378232ms 538.759449ms 544.938518ms 545.4269ms 553.641421ms 554.59574ms 559.057315ms 589.286478ms 590.823593ms 622.532983ms 642.615904ms 652.732329ms 663.977356ms 677.552974ms 689.875508ms 697.269138ms 712.950082ms 713.08912ms 713.772995ms 716.521832ms 719.642338ms 720.437107ms 724.546653ms 725.366885ms 727.6719ms 740.811991ms 742.144225ms 742.908209ms 744.547178ms 745.047279ms 745.449144ms 746.08677ms 746.237119ms 746.999522ms 747.560865ms 748.346629ms 750.851412ms 751.094388ms 752.563981ms 752.642952ms 752.893027ms 753.068485ms 753.216882ms 753.37569ms 754.01721ms 754.427864ms 754.817908ms 754.94468ms 755.454054ms 755.637503ms 757.545507ms 759.40503ms 760.283636ms 761.282813ms 768.66129ms 769.499216ms 769.98197ms 770.579174ms 776.482425ms 780.529104ms 781.682273ms 783.315639ms 784.4585ms 785.234473ms 793.11841ms 794.877186ms 805.1925ms 813.776833ms 1.530407198s 1m9.176194086s 1m10.898633826s 1m10.940366138s 1m10.998529441s 1m11.049653886s 1m11.094111872s 1m11.136815954s 1m11.17133573s 1m11.206903263s 1m11.28300423s 1m11.329962775s 1m11.400939419s 1m11.435141226s 1m11.447673702s 1m11.490358219s] Dec 2 09:20:43.204: INFO: 50 %ile: 490.030456ms Dec 2 09:20:43.204: INFO: 90 %ile: 793.11841ms Dec 2 09:20:43.204: INFO: 99 %ile: 1m11.447673702s Dec 2 09:20:43.204: INFO: Total sample count: 200 Dec 2 09:20:43.205: FAIL: Tail (99 percentile) latency should be less than 50s 50, 90, 99 percentiles: 490.030456ms 793.11841ms 1m11.447673702s Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000bd04e0, 0x735d4a0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "svc-latency-1624". STEP: Found 5 events.
Dec 2 09:20:43.427: INFO: At 2022-12-02 09:19:15 +0000 UTC - event for svc-latency-rc: {replication-controller } SuccessfulCreate: Created pod: svc-latency-rc-n6rnr Dec 2 09:20:43.427: INFO: At 2022-12-02 09:19:15 +0000 UTC - event for svc-latency-rc-n6rnr: {default-scheduler } Scheduled: Successfully assigned svc-latency-1624/svc-latency-rc-n6rnr to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:43.427: INFO: At 2022-12-02 09:19:17 +0000 UTC - event for svc-latency-rc-n6rnr: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/pause:3.6" already present on machine Dec 2 09:20:43.428: INFO: At 2022-12-02 09:19:17 +0000 UTC - event for svc-latency-rc-n6rnr: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container svc-latency-rc Dec 2 09:20:43.428: INFO: At 2022-12-02 09:19:17 +0000 UTC - event for svc-latency-rc-n6rnr: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container svc-latency-rc Dec 2 09:20:43.639: INFO: POD NODE PHASE GRACE CONDITIONS Dec 2 09:20:43.639: INFO: svc-latency-rc-n6rnr ip-172-20-49-67.ap-southeast-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:19:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:19:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:19:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:19:15 +0000 UTC }] Dec 2 09:20:43.639: INFO: Dec 2 09:20:44.066: INFO: Logging node info for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:44.283: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-34-182.ap-southeast-1.compute.internal fd7593c8-1a7c-4e6d-9018-4c36698568dc 38632 0 2022-12-02 09:02:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-34-182.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-34-182.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7299":"csi-mock-csi-mock-volumes-7299","ebs.csi.aws.com":"i-070fdf3c5d5f93304"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.34.182/19 projectcalico.org/IPv4IPIPTunnelAddr:100.116.72.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {Go-http-client Update v1 2022-12-02 09:03:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-070fdf3c5d5f93304,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:22 +0000 UTC,LastTransitionTime:2022-12-02 09:03:22 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:03:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.34.182,},NodeAddress{Type:ExternalIP,Address:54.169.57.14,},NodeAddress{Type:Hostname,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-57-14.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec264a17458d690f294e12b6a6b2138c,SystemUUID:ec264a17-458d-690f-294e-12b6a6b2138c,BootID:37b6e011-229a-4491-b86f-f149d97d10c0,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd 
k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:44.283: INFO: Logging kubelet events for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:44.524: INFO: Logging pods the kubelet thinks is on node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:44.741: INFO: simpletest.rc-rlzhz started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.741: INFO: ss2-2 started at 2022-12-02 09:20:43 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container 
webserver ready: false, restart count 0 Dec 2 09:20:44.741: INFO: simpletest.rc-ntn9m started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.741: INFO: calico-node-xhqfx started at 2022-12-02 09:02:23 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:44.741: INFO: Init container install-cni ready: true, restart count 0 Dec 2 09:20:44.741: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:44.741: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:44.741: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:44.741: INFO: startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c started at 2022-12-02 09:16:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container busybox ready: false, restart count 0 Dec 2 09:20:44.741: INFO: test-ss-0 started at 2022-12-02 09:17:26 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:44.741: INFO: kube-proxy-ip-172-20-34-182.ap-southeast-1.compute.internal started at 2022-12-02 09:02:02 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:44.741: INFO: ebs-csi-node-4b4zl started at 2022-12-02 09:02:23 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:44.741: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:44.741: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:44.741: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:44.741: INFO: coredns-5556cb978d-bx2m5 started at 2022-12-02 09:03:10 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container coredns ready: true, restart count 0 Dec 2 09:20:44.741: INFO: csi-mockplugin-0 started at 2022-12-02 09:18:23 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:44.741: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:44.741: INFO: Container driver-registrar ready: true, restart count 0 Dec 2 09:20:44.741: INFO: Container mock ready: true, restart count 0 Dec 2 09:20:44.741: INFO: simpletest.rc-rptqs started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.741: INFO: pod-client started at 2022-12-02 09:19:00 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container pod-client ready: true, restart count 0 Dec 2 09:20:44.741: INFO: simpletest.rc-w9lsq started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.741: INFO: simpletest.rc-tfx9v started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.741: INFO: simpletest.rc-swnct started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.741: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:45.532: INFO: Latency metrics for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:45.532: INFO: Logging node info for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:45.743: INFO: Node Info: 
&Node{ObjectMeta:{ip-172-20-37-90.ap-southeast-1.compute.internal f779b12d-0e95-4e7f-929e-368941a29b99 40279 0 2022-12-02 09:02:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-90.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-37-90.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-001dd83f455b4a895"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.37.90/19 projectcalico.org/IPv4IPIPTunnelAddr:100.114.18.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:19:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-001dd83f455b4a895,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:05 +0000 UTC,LastTransitionTime:2022-12-02 09:03:05 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:02:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.90,},NodeAddress{Type:ExternalIP,Address:13.212.195.103,},NodeAddress{Type:Hostname,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-195-103.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec216e9b184e3e44fb8ed6af9b651047,SystemUUID:ec216e9b-184e-3e44-fb8e-d6af9b651047,BootID:0bbb1eb8-60c7-4bb1-b8c7-bb110f238f78,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b 
k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:45.743: INFO: Logging kubelet events for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:45.956: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:46.179: INFO: pod-secrets-0da0406d-ca0f-4f4d-84a5-33a16c483cff started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container secret-volume-test ready: false, restart count 0 Dec 2 09:20:46.180: INFO: pod-terminate-status-0-14 started at 2022-12-02 09:20:41 +0000 UTC (1+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Init container fail ready: false, restart count 0 Dec 2 09:20:46.180: INFO: Container blocked ready: false, restart count 0 Dec 2 09:20:46.180: INFO: execpodws7zw started at 2022-12-02 09:20:42 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container agnhost-container ready: false, restart count 0 Dec 2 09:20:46.180: INFO: simpletest.rc-zj2ft started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.180: INFO: test-webserver-98190dda-eab4-4a0b-a4ec-afbb6264f9c0 started at 2022-12-02 09:18:17 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container test-webserver ready: true, restart count 0 Dec 2 09:20:46.180: INFO: coredns-autoscaler-85fcbbb64-kb6k7 started at 2022-12-02 09:02:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container autoscaler ready: true, restart count 0 Dec 2 09:20:46.180: INFO: simpletest.rc-njxsz started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.180: INFO: kube-proxy-ip-172-20-37-90.ap-southeast-1.compute.internal started at 2022-12-02 09:01:54 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:46.180: INFO: calico-node-cqg7n started at 2022-12-02 09:02:04 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:46.180: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:20:46.180: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:46.180: INFO: Init 
container flexvol-driver ready: true, restart count 0 Dec 2 09:20:46.180: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:46.180: INFO: httpd started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container httpd ready: false, restart count 0 Dec 2 09:20:46.180: INFO: bin-falseb55ef21a-ef68-4260-9830-7a34a8977c97 started at 2022-12-02 09:20:42 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container bin-falseb55ef21a-ef68-4260-9830-7a34a8977c97 ready: false, restart count 0 Dec 2 09:20:46.180: INFO: simpletest.rc-r9d9b started at 2022-12-02 09:18:34 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.180: INFO: simpletest.rc-t5ztv started at 2022-12-02 09:18:31 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.180: INFO: coredns-5556cb978d-pztr5 started at 2022-12-02 09:02:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container coredns ready: true, restart count 0 Dec 2 09:20:46.180: INFO: simpletest.rc-xqqbd started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.180: INFO: ebs-csi-node-vswvn started at 2022-12-02 09:02:04 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:46.180: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:46.180: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:46.180: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:46.180: INFO: test-ss-1 started at 2022-12-02 09:18:26 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:46.180: INFO: agnhost-primary-dgxqj started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.180: INFO: Container agnhost-primary ready: false, restart count 0 Dec 2 09:20:46.933: INFO: Latency metrics for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:46.933: INFO: Logging node info for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:47.147: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-49-67.ap-southeast-1.compute.internal 81600d2c-3d2a-4421-913e-e1c53c1ad1df 41217 0 2022-12-02 09:02:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a io.kubernetes.storage.mock/node:some-mock-node kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-49-67.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-49-67.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1102":"ip-172-20-49-67.ap-southeast-1.compute.internal","ebs.csi.aws.com":"i-056f60b74d454bea7"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.49.67/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.24.64 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:03:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:18:47 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:18:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-056f60b74d454bea7,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:19 +0000 UTC,LastTransitionTime:2022-12-02 09:03:19 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 
UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:03:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.49.67,},NodeAddress{Type:ExternalIP,Address:13.228.79.89,},NodeAddress{Type:Hostname,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-228-79-89.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2bd833fc2a274ccf3bf225f245ddce,SystemUUID:ec2bd833-fc2a-274c-cf3b-f225f245ddce,BootID:1ab59414-4d0c-4bc8-bb64-5f41a1b02c74,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:dd6d57960dc104a4ee0fa7c58c6faa3e38725561af374c17f8cb905f7f73ba66 k8s.gcr.io/build-image/debian-iptables:bullseye-v1.1.0],SizeBytes:27059231,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b,DevicePath:,},},Config:nil,},} Dec 2 09:20:47.152: INFO: Logging kubelet events for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:47.372: INFO: Logging pods the kubelet thinks is on node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:47.630: INFO: kube-proxy-ip-172-20-49-67.ap-southeast-1.compute.internal started at 2022-12-02 09:01:59 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:47.630: INFO: ss2-0 started at 2022-12-02 09:19:03 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:47.630: INFO: oidc-discovery-validator started at 2022-12-02 09:19:03 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container oidc-discovery-validator ready: false, restart count 0 Dec 2 09:20:47.630: INFO: default started at 2022-12-02 09:20:44 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:47.630: INFO: ebs-csi-node-w9kzj started at 2022-12-02 09:02:20 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:47.630: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:47.630: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:47.630: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:47.630: INFO: csi-hostpathplugin-0 started at 2022-12-02 09:18:29 +0000 UTC (0+7 container statuses recorded) Dec 2 09:20:47.630: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:20:47.630: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:47.630: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:20:47.630: INFO: Container csi-snapshotter ready: true, restart count 0 Dec 2 09:20:47.630: INFO: Container hostpath ready: true, restart count 0 Dec 2 09:20:47.630: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:47.630: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:47.630: INFO: externalsvc-gfw8b started at 2022-12-02 09:19:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container externalsvc ready: true, restart count 0 Dec 2 09:20:47.630: INFO: private started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:47.630: INFO: master started at 2022-12-02 09:19:13 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:47.630: INFO: downwardapi-volume-e3f86704-2ad4-4471-80f7-f49d1890acfa started at 2022-12-02 09:20:44 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container client-container ready: false, restart count 0 Dec 2 09:20:47.630: INFO: simpletest.rc-xt5qf started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:47.630: INFO: slave started at 2022-12-02 09:19:22 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:47.630: INFO: ss-0 started at 2022-12-02 
09:19:03 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:47.630: INFO: svc-latency-rc-n6rnr started at 2022-12-02 09:19:15 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Container svc-latency-rc ready: true, restart count 0 Dec 2 09:20:47.630: INFO: calico-node-n6lj9 started at 2022-12-02 09:02:20 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:47.630: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:47.630: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:20:47.630: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:47.630: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:47.630: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:48.679: INFO: Latency metrics for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:48.679: INFO: Logging node info for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:20:48.899: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-55-194.ap-southeast-1.compute.internal 890854e9-f510-402d-9886-49c1d41318f4 34763 0 2022-12-02 09:00:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:master-ap-southeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-55-194.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-00b46fae03d775a19"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.55.194/19 projectcalico.org/IPv4IPIPTunnelAddr:100.104.201.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-02 09:00:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2022-12-02 09:01:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2022-12-02 09:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:01:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } 
{Go-http-client Update v1 2022-12-02 09:02:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:02:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-00b46fae03d775a19,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3894931456 0} {<nil>} 3803644Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790073856 0} {<nil>} 3701244Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:02:00 +0000 UTC,LastTransitionTime:2022-12-02 09:02:00 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:01:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.194,},NodeAddress{Type:ExternalIP,Address:54.169.84.77,},NodeAddress{Type:Hostname,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-84-77.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2521391aeba8d2805b54ac578aa7d0,SystemUUID:ec252139-1aeb-a8d2-805b-54ac578aa7d0,BootID:4e785fe8-5068-4fd6-b8b0-5a4aae03c815,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:256a64fb44876d270f04ada1afd3ca431341f249aa52cbe2b3780f8f23961142 registry.k8s.io/etcdadm/etcd-manager:v3.0.20220727],SizeBytes:216364516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.14],SizeBytes:136567243,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.14],SizeBytes:126380852,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.14],SizeBytes:54860595,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:58cc91c551e9e941a752e205eefed1c8da56f97a51e054b3d341b67bb7bf27eb docker.io/calico/kube-controllers:v3.23.5],SizeBytes:53774679,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.24.5],SizeBytes:41269276,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.24.5],SizeBytes:40816784,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 
registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.24.5],SizeBytes:5130223,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:48.921: INFO: Logging kubelet events for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:20:49.159: INFO: Logging pods the kubelet thinks is on node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:20:53.491: INFO: etcd-manager-events-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:53.561: INFO: Container etcd-manager ready: true, restart count 0 Dec 2 09:20:53.569: INFO: etcd-manager-main-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:53.587: INFO: Container etcd-manager ready: true, restart count 0 Dec 2 09:20:53.587: INFO: kube-apiserver-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+2 container statuses recorded) Dec 2 09:20:53.587: INFO: Container healthcheck ready: true, restart count 0 Dec 2 09:20:53.587: INFO: Container kube-apiserver ready: true, restart count 1 Dec 2 09:20:53.587: INFO: kube-controller-manager-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:53.587: INFO: Container kube-controller-manager ready: true, restart count 2 Dec 2 09:20:53.587: INFO: kube-proxy-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:53.587: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:53.587: INFO: kube-scheduler-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:53.587: INFO: Container kube-scheduler ready: true, restart count 0 Dec 2 09:20:53.587: INFO: calico-node-xfrb9 started at 2022-12-02 09:01:32 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:53.587: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:53.590: INFO: Init container install-cni ready: true, restart count 0 Dec 2 09:20:53.590: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:53.590: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:53.590: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:53.590: INFO: kops-controller-7l85j started at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:53.590: INFO: Container kops-controller ready: true, restart count 0 Dec 2 09:20:53.590: INFO: ebs-csi-controller-55c8659c7c-sqq7m started at 2022-12-02 09:01:32 +0000 UTC (0+5 container statuses recorded) Dec 2 09:20:53.590: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:20:53.590: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:53.590: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:20:53.590: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 
09:20:53.590: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:53.590: INFO: ebs-csi-node-rfwfq started at 2022-12-02 09:01:32 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:53.590: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:53.590: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:53.590: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:53.590: INFO: dns-controller-847484c97f-z8rs4 started at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:53.590: INFO: Container dns-controller ready: true, restart count 0 Dec 2 09:20:53.590: INFO: calico-kube-controllers-795c657547-9mz5t started at 2022-12-02 09:01:48 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:53.590: INFO: Container calico-kube-controllers ready: true, restart count 0 Dec 2 09:25:48.382: INFO: Latency metrics for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:25:48.383: INFO: Logging node info for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:25:48.594: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-60-164.ap-southeast-1.compute.internal 4d06e01c-27c4-4c2f-b118-647413c7ddf6 42659 0 2022-12-02 09:02:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-60-164.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-60-164.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9857":"ip-172-20-60-164.ap-southeast-1.compute.internal","ebs.csi.aws.com":"i-0a7cd257efff997b0"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.60.164/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.61.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:17:54 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:17:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-0a7cd257efff997b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:11 +0000 UTC,LastTransitionTime:2022-12-02 09:03:11 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:24:43 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:24:43 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:24:43 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:24:43 +0000 UTC,LastTransitionTime:2022-12-02 09:02:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.60.164,},NodeAddress{Type:ExternalIP,Address:13.212.105.239,},NodeAddress{Type:Hostname,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-105-239.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ab9d0d1126900acfd3b82032bd9b,SystemUUID:ec28ab9d-0d11-2690-0acf-d3b82032bd9b,BootID:925eb9d6-3c66-49ad-be43-0411968ca10c,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0413aef78391fa3e6],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0413aef78391fa3e6,DevicePath:,},},Config:nil,},} Dec 2 09:25:48.594: INFO: Logging kubelet events for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:25:48.819: INFO: Logging pods the kubelet thinks is on node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:25:49.245: INFO: pod-terminate-status-2-14 started at 2022-12-02 09:19:29 +0000 UTC (1+1 container statuses recorded) Dec 2 09:25:49.245: INFO: Init container fail ready: false, restart count 0 Dec 2 09:25:49.245: INFO: Container blocked ready: false, restart count 0 Dec 2 09:25:49.245: INFO: pod-subpath-test-preprovisionedpv-wbdt started at 2022-12-02 09:25:47 +0000 UTC (2+2 container statuses recorded) Dec 2 09:25:49.245: INFO: Init container init-volume-preprovisionedpv-wbdt ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Init container test-init-subpath-preprovisionedpv-wbdt ready: false, restart count 0 Dec 2 09:25:49.245: INFO: Container test-container-subpath-preprovisionedpv-wbdt ready: false, restart count 0 Dec 2 09:25:49.245: INFO: Container test-container-volume-preprovisionedpv-wbdt ready: false, restart count 0 Dec 2 09:25:49.245: INFO: ebs-csi-node-lrwc5 started at 2022-12-02 09:02:06 +0000 UTC (0+3 container statuses recorded) Dec 2 09:25:49.245: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:25:49.245: INFO: external-client started at 2022-12-02 09:19:27 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:49.245: INFO: Container external-client ready: true, restart count 0 Dec 2 09:25:49.245: INFO: hostexec-ip-172-20-60-164.ap-southeast-1.compute.internal-qrptd started at 2022-12-02 09:20:43 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:49.245: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:25:49.245: INFO: externalsvc-kc489 started at 2022-12-02 09:19:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:49.245: INFO: Container externalsvc ready: true, restart count 0 Dec 2 09:25:49.245: INFO: csi-hostpathplugin-0 started at 2022-12-02 09:17:33 +0000 UTC (0+7 container statuses recorded) Dec 2 09:25:49.245: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Container csi-snapshotter ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Container hostpath ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:25:49.245: INFO: kube-proxy-ip-172-20-60-164.ap-southeast-1.compute.internal started at 2022-12-02 09:01:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:49.245: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:25:49.245: INFO: calico-node-gv4lf started at 2022-12-02 09:02:06 +0000 UTC (4+1 container statuses recorded) Dec 2 09:25:49.245: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Init container install-cni ready: true, restart count 1 Dec 2 
09:25:49.245: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:25:49.245: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:25:49.245: INFO: ss2-1 started at 2022-12-02 09:19:19 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:49.245: INFO: Container webserver ready: true, restart count 0 Dec 2 09:25:50.030: INFO: Latency metrics for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:25:50.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svc-latency-1624" for this suite.
Filter through log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sNodeLease\sNodeLease\sthe\skubelet\sshould\sreport\snode\sstatus\sinfrequently$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 Dec 2 09:20:40.915: Unexpected error: <*fmt.wrapError | 0xc00312a000>: { msg: "unexpected error when reading response body. Please retry. Original error: http2: client connection lost", err: { s: "http2: client connection lost", }, } unexpected error when reading response body. Please retry. Original error: http2: client connection lost occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:189from junit_25.xml
[BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Dec 2 09:18:21.494: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename node-lease-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:43 [It] the kubelet should report node status infrequently /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:112 �[1mSTEP�[0m: wait until node is ready Dec 2 09:18:23.155: INFO: Waiting up to 5m0s for node ip-172-20-34-182.ap-southeast-1.compute.internal condition Ready to be true �[1mSTEP�[0m: wait until there is node lease �[1mSTEP�[0m: verify NodeStatus report period is longer than lease duration Dec 2 09:18:24.987: INFO: node status heartbeat is unchanged for 1.207946385s, waiting for 1m20s Dec 2 09:18:26.000: INFO: node status heartbeat is unchanged for 2.220577605s, waiting for 1m20s Dec 2 09:18:26.988: INFO: node status heartbeat is unchanged for 3.208226956s, waiting for 1m20s Dec 2 09:18:27.991: INFO: node status heartbeat is unchanged for 4.211856706s, waiting for 1m20s Dec 2 09:18:28.987: INFO: node status heartbeat is unchanged for 5.207639497s, waiting for 1m20s Dec 2 09:18:29.988: INFO: node status heartbeat is unchanged for 6.208580056s, waiting for 1m20s Dec 2 09:18:30.987: INFO: node status heartbeat is unchanged for 7.207186512s, waiting for 1m20s Dec 2 09:18:31.987: INFO: node status heartbeat is unchanged for 8.207975676s, waiting for 1m20s Dec 2 09:18:32.988: INFO: node status heartbeat is unchanged for 9.20886469s, waiting for 1m20s Dec 2 09:18:33.994: INFO: node status heartbeat is unchanged for 10.214257325s, waiting for 1m20s Dec 2 09:18:35.003: INFO: node status heartbeat is unchanged for 11.223395671s, waiting for 1m20s Dec 2 09:18:36.032: INFO: node status heartbeat is unchanged for 12.252232826s, waiting for 1m20s Dec 2 09:18:37.069: INFO: node status heartbeat is unchanged for 13.28991642s, waiting for 1m20s Dec 2 09:18:37.997: INFO: node status heartbeat is unchanged for 14.217727527s, waiting for 1m20s Dec 2 09:18:38.992: INFO: node status heartbeat is unchanged for 15.212724427s, waiting for 1m20s Dec 2 09:18:39.988: INFO: node status heartbeat is unchanged for 16.208497885s, waiting for 1m20s Dec 2 09:18:40.987: INFO: node status heartbeat is unchanged for 17.207383788s, waiting for 1m20s Dec 2 09:18:41.994: INFO: node status heartbeat is unchanged for 18.214210987s, waiting for 1m20s Dec 2 09:18:42.987: INFO: node status heartbeat is unchanged for 19.207181261s, waiting for 1m20s Dec 2 09:18:43.987: INFO: node status heartbeat is unchanged for 20.207363265s, waiting for 1m20s Dec 2 09:18:44.987: INFO: node status heartbeat is unchanged for 21.20723868s, waiting for 1m20s Dec 2 09:18:45.989: INFO: node status heartbeat is unchanged for 22.20990119s, waiting for 1m20s Dec 2 09:18:46.987: INFO: node status heartbeat is unchanged for 23.207843779s, waiting for 1m20s Dec 2 09:18:47.987: INFO: node status heartbeat is unchanged for 24.2071949s, waiting for 1m20s Dec 2 09:18:48.987: INFO: node status heartbeat is unchanged for 25.207889646s, waiting for 1m20s Dec 2 09:18:49.989: INFO: 
node status heartbeat is unchanged for 26.209415722s, waiting for 1m20s Dec 2 09:18:50.987: INFO: node status heartbeat is unchanged for 27.207284293s, waiting for 1m20s Dec 2 09:18:51.988: INFO: node status heartbeat is unchanged for 28.208194504s, waiting for 1m20s Dec 2 09:18:52.987: INFO: node status heartbeat is unchanged for 29.207377299s, waiting for 1m20s Dec 2 09:18:53.987: INFO: node status heartbeat is unchanged for 30.207250927s, waiting for 1m20s Dec 2 09:18:54.987: INFO: node status heartbeat is unchanged for 31.207323889s, waiting for 1m20s Dec 2 09:18:55.987: INFO: node status heartbeat is unchanged for 32.207368595s, waiting for 1m20s Dec 2 09:18:56.988: INFO: node status heartbeat is unchanged for 33.208522643s, waiting for 1m20s Dec 2 09:18:57.986: INFO: node status heartbeat is unchanged for 34.206987365s, waiting for 1m20s Dec 2 09:18:58.989: INFO: node status heartbeat is unchanged for 35.209911201s, waiting for 1m20s Dec 2 09:18:59.988: INFO: node status heartbeat is unchanged for 36.20818871s, waiting for 1m20s Dec 2 09:19:00.989: INFO: node status heartbeat is unchanged for 37.209951146s, waiting for 1m20s Dec 2 09:19:01.988: INFO: node status heartbeat is unchanged for 38.208306443s, waiting for 1m20s Dec 2 09:19:02.988: INFO: node status heartbeat is unchanged for 39.208606083s, waiting for 1m20s Dec 2 09:19:03.987: INFO: node status heartbeat is unchanged for 40.207724522s, waiting for 1m20s Dec 2 09:19:04.988: INFO: node status heartbeat is unchanged for 41.208154934s, waiting for 1m20s Dec 2 09:19:05.990: INFO: node status heartbeat is unchanged for 42.210476364s, waiting for 1m20s Dec 2 09:19:06.987: INFO: node status heartbeat is unchanged for 43.207108067s, waiting for 1m20s Dec 2 09:19:07.988: INFO: node status heartbeat is unchanged for 44.208098856s, waiting for 1m20s Dec 2 09:19:08.987: INFO: node status heartbeat is unchanged for 45.207731429s, waiting for 1m20s Dec 2 09:19:09.988: INFO: node status heartbeat is unchanged for 46.208202703s, waiting for 1m20s Dec 2 09:19:10.990: INFO: node status heartbeat is unchanged for 47.210662248s, waiting for 1m20s Dec 2 09:19:11.993: INFO: node status heartbeat is unchanged for 48.213755759s, waiting for 1m20s Dec 2 09:19:12.988: INFO: node status heartbeat is unchanged for 49.208545465s, waiting for 1m20s Dec 2 09:19:13.989: INFO: node status heartbeat is unchanged for 50.209578857s, waiting for 1m20s Dec 2 09:19:14.990: INFO: node status heartbeat is unchanged for 51.210443369s, waiting for 1m20s Dec 2 09:19:15.988: INFO: node status heartbeat is unchanged for 52.208169546s, waiting for 1m20s Dec 2 09:19:16.988: INFO: node status heartbeat is unchanged for 53.208106567s, waiting for 1m20s Dec 2 09:19:17.988: INFO: node status heartbeat is unchanged for 54.208504128s, waiting for 1m20s Dec 2 09:19:18.987: INFO: node status heartbeat is unchanged for 55.207877226s, waiting for 1m20s Dec 2 09:19:19.988: INFO: node status heartbeat is unchanged for 56.208101877s, waiting for 1m20s Dec 2 09:19:20.988: INFO: node status heartbeat is unchanged for 57.208965127s, waiting for 1m20s Dec 2 09:19:21.987: INFO: node status heartbeat is unchanged for 58.207271014s, waiting for 1m20s Dec 2 09:19:22.988: INFO: node status heartbeat is unchanged for 59.208566353s, waiting for 1m20s Dec 2 09:19:23.988: INFO: node status heartbeat is unchanged for 1m0.208473339s, waiting for 1m20s Dec 2 09:19:24.995: INFO: node status heartbeat is unchanged for 1m1.215212347s, waiting for 1m20s Dec 2 09:19:25.995: INFO: node status heartbeat is 
unchanged for 1m2.21572719s, waiting for 1m20s Dec 2 09:19:26.987: INFO: node status heartbeat is unchanged for 1m3.207601339s, waiting for 1m20s Dec 2 09:19:27.991: INFO: node status heartbeat is unchanged for 1m4.211547466s, waiting for 1m20s Dec 2 09:19:28.992: INFO: node status heartbeat is unchanged for 1m5.212897588s, waiting for 1m20s Dec 2 09:19:30.015: INFO: node status heartbeat is unchanged for 1m6.23541446s, waiting for 1m20s E1202 09:20:32.744518 6610 request.go:1101] Unexpected error when reading response body: http2: client connection lost Dec 2 09:20:40.912: FAIL: Unexpected error: <*fmt.wrapError | 0xc00312a000>: { msg: "unexpected error when reading response body. Please retry. Original error: http2: client connection lost", err: { s: "http2: client connection lost", }, } unexpected error when reading response body. Please retry. Original error: http2: client connection lost occurred Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.getHeartbeatTimeAndStatus({_, _}, {_, _}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:189 +0xcd k8s.io/kubernetes/test/e2e/common/node.glob..func13.2.3.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:138 +0xa7 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0xc003ccc480, 0xc0008e1c28}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d3b68, 0xc0001b2000}, 0xc002aa1170) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x79d3b68, 0xc0001b2000}, 0xc005229800, 0x2cc954a) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 +0x118 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d3b68, 0xc0001b2000}, 0x0, 0x2cc8045, 0x38) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x79d3b68, 0xc0001b2000}, 0x24a3293, 0xc003440e50, 0x2441ec7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:458 +0x47 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0xc001ebd260, 0xc001ebd290, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:443 +0x50 k8s.io/kubernetes/test/e2e/common/node.glob..func13.2.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:137 +0x53c k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x2000100000001) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0008e1a00, 0x735d4a0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a E1202 09:20:40.942444 6610 runtime.go:78] Observed a panic: 
ginkgowrapper.FailurePanic{Message:"Dec 2 09:20:40.915: Unexpected error:\n <*fmt.wrapError | 0xc00312a000>: {\n msg: \"unexpected error when reading response body. Please retry. Original error: http2: client connection lost\",\n err: {\n s: \"http2: client connection lost\",\n },\n }\n unexpected error when reading response body. Please retry. Original error: http2: client connection lost\noccurred", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go", Line:189, FullStackTrace:"k8s.io/kubernetes/test/e2e/common/node.getHeartbeatTimeAndStatus({_, _}, {_, _})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:189 +0xcd\nk8s.io/kubernetes/test/e2e/common/node.glob..func13.2.3.2()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:138 +0xa7\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0xc003ccc480, 0xc0008e1c28})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d3b68, 0xc0001b2000}, 0xc002aa1170)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x79d3b68, 0xc0001b2000}, 0xc005229800, 0x2cc954a)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 +0x118\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d3b68, 0xc0001b2000}, 0x0, 0x2cc8045, 0x38)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594 +0x9a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x79d3b68, 0xc0001b2000}, 0x24a3293, 0xc003440e50, 0x2441ec7)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:458 +0x47\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0xc001ebd260, 0xc001ebd290, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:443 +0x50\nk8s.io/kubernetes/test/e2e/common/node.glob..func13.2.3()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:137 +0x53c\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697\nk8s.io/kubernetes/test/e2e.TestE2E(0x2000100000001)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc0008e1a00, 0x735d4a0)\n\t/usr/local/go/src/testing/testing.go:1259 +0x102\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1306 +0x35a"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
) goroutine 142 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6c3bd00, 0xc003605040}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75 panic({0x6c3bd00, 0xc003605040}) /usr/local/go/src/runtime/panic.go:1038 +0x215 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x73 panic({0x62d47a0, 0x78aa9e0}) /usr/local/go/src/runtime/panic.go:1038 +0x215 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc0008a4d00, 0x18c}, {0xc003440300, 0x0, 0x40}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0008a4d00, 0x18c}, {0xc0034403e0, 0x70cab8a, 0xc003440400}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7 k8s.io/kubernetes/test/e2e/framework.Fail({0xc00081d080, 0x177}, {0xc00460a070, 0xc00081d080, 0xc003440060}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:63 +0x149 k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).match(0xc003440548, {0x79bd678, 0xaa10408}, 0x0, {0x0, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:79 +0x1bd k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion.(*Assertion).NotTo(0xc003440548, {0x79bd678, 0xaa10408}, {0x0, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/gomega/internal/assertion/assertion.go:48 +0x92 k8s.io/kubernetes/test/e2e/framework.ExpectNoErrorWithOffset(0x0, {0x78b0d60, 0xc00312a000}, {0x0, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:46 +0xa9 k8s.io/kubernetes/test/e2e/framework.ExpectNoError(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/expect.go:40 k8s.io/kubernetes/test/e2e/common/node.getHeartbeatTimeAndStatus({_, _}, {_, _}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:189 +0xcd k8s.io/kubernetes/test/e2e/common/node.glob..func13.2.3.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:138 +0xa7 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0xc003ccc480, 0xc0008e1c28}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d3b68, 0xc0001b2000}, 0xc002aa1170) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext({0x79d3b68, 0xc0001b2000}, 0xc005229800, 0x2cc954a) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:660 +0x118 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d3b68, 0xc0001b2000}, 0x0, 0x2cc8045, 0x38) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:594 +0x9a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollWithContext({0x79d3b68, 0xc0001b2000}, 0x24a3293, 0xc003440e50, 0x2441ec7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:458 +0x47 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Poll(0xc001ebd260, 0xc001ebd290, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:443 +0x50 k8s.io/kubernetes/test/e2e/common/node.glob..func13.2.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/node_lease.go:137 +0x53c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc0008e1ba0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xba k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0034415c8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x125 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002f60b40, 0xc003441990, {0x78b4560, 0xc0001e4800}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x2a9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002f60b40, {0x78b4560, 0xc0001e4800}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xe7 
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc002ba9400, 0xc002f60b40) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0xe5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc002ba9400) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x1a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc002ba9400) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0xc5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0001fc070, {0x7fe320336d38, 0xc0008e1a00}, {0x710a6bd, 0x40}, {0xc000c90060, 0x3, 0x3}, {0x7a2bdb8, 0xc0001e4800}, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x4d2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x78baec0, 0xc0008e1a00}, {0x710a6bd, 0x14}, {0xc000c96040, 0x3, 0x6}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x185 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x78baec0, 0xc0008e1a00}, {0x710a6bd, 0x14}, {0xc000c84000, 0x2, 0x2}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0xf9 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x2000100000001) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0008e1a00, 0x735d4a0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-node] NodeLease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "node-lease-test-2317". �[1mSTEP�[0m: Found 0 events. 
Dec 2 09:20:41.949: INFO: POD NODE PHASE GRACE CONDITIONS Dec 2 09:20:41.949: INFO: Dec 2 09:20:42.592: INFO: Logging node info for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:42.804: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-34-182.ap-southeast-1.compute.internal fd7593c8-1a7c-4e6d-9018-4c36698568dc 38632 0 2022-12-02 09:02:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-34-182.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-34-182.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7299":"csi-mock-csi-mock-volumes-7299","ebs.csi.aws.com":"i-070fdf3c5d5f93304"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.34.182/19 projectcalico.org/IPv4IPIPTunnelAddr:100.116.72.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {Go-http-client Update v1 2022-12-02 09:03:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-070fdf3c5d5f93304,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:22 +0000 UTC,LastTransitionTime:2022-12-02 09:03:22 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:03:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.34.182,},NodeAddress{Type:ExternalIP,Address:54.169.57.14,},NodeAddress{Type:Hostname,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-57-14.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec264a17458d690f294e12b6a6b2138c,SystemUUID:ec264a17-458d-690f-294e-12b6a6b2138c,BootID:37b6e011-229a-4491-b86f-f149d97d10c0,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c 
k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 
registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:42.807: INFO: Logging kubelet events for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:43.020: INFO: Logging pods the kubelet thinks is on node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:43.472: INFO: calico-node-xhqfx started at 2022-12-02 09:02:23 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:43.473: INFO: Init container install-cni ready: true, restart count 0 Dec 2 09:20:43.473: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:43.473: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:43.473: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:43.473: INFO: startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c started at 2022-12-02 09:16:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container busybox ready: false, restart count 0 Dec 2 09:20:43.473: INFO: test-ss-0 started at 2022-12-02 09:17:26 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:43.473: INFO: kube-proxy-ip-172-20-34-182.ap-southeast-1.compute.internal started at 2022-12-02 09:02:02 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:43.473: INFO: ebs-csi-node-4b4zl started at 2022-12-02 09:02:23 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:43.473: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:43.473: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:43.473: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:43.473: INFO: coredns-5556cb978d-bx2m5 started at 2022-12-02 09:03:10 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container coredns ready: true, restart count 0 Dec 2 09:20:43.473: INFO: csi-mockplugin-0 started at 2022-12-02 09:18:23 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:43.473: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:43.473: INFO: Container 
driver-registrar ready: true, restart count 0 Dec 2 09:20:43.473: INFO: Container mock ready: true, restart count 0 Dec 2 09:20:43.473: INFO: ss2-2 started at 2022-12-02 09:19:22 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:43.473: INFO: simpletest.rc-rptqs started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.473: INFO: pod-client started at 2022-12-02 09:19:00 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container pod-client ready: true, restart count 0 Dec 2 09:20:43.473: INFO: simpletest.rc-w9lsq started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.473: INFO: simpletest.rc-tfx9v started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.473: INFO: simpletest.rc-swnct started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.473: INFO: simpletest.rc-rlzhz started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.473: INFO: simpletest.rc-ntn9m started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.473: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.221: INFO: Latency metrics for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:44.221: INFO: Logging node info for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:44.470: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-37-90.ap-southeast-1.compute.internal f779b12d-0e95-4e7f-929e-368941a29b99 40279 0 2022-12-02 09:02:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-90.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-37-90.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-001dd83f455b4a895"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.37.90/19 projectcalico.org/IPv4IPIPTunnelAddr:100.114.18.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:19:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-001dd83f455b4a895,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:05 +0000 UTC,LastTransitionTime:2022-12-02 09:03:05 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:02:55 +0000 
UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.90,},NodeAddress{Type:ExternalIP,Address:13.212.195.103,},NodeAddress{Type:Hostname,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-195-103.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec216e9b184e3e44fb8ed6af9b651047,SystemUUID:ec216e9b-184e-3e44-fb8e-d6af9b651047,BootID:0bbb1eb8-60c7-4bb1-b8c7-bb110f238f78,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:44.471: INFO: Logging kubelet events for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:44.690: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:44.910: INFO: coredns-5556cb978d-pztr5 started at 2022-12-02 09:02:55 +0000 UTC 
(0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container coredns ready: true, restart count 0 Dec 2 09:20:44.910: INFO: simpletest.rc-xqqbd started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.910: INFO: ebs-csi-node-vswvn started at 2022-12-02 09:02:04 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:44.910: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:44.910: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:44.910: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:44.910: INFO: test-ss-1 started at 2022-12-02 09:18:26 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:44.910: INFO: agnhost-primary-dgxqj started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container agnhost-primary ready: false, restart count 0 Dec 2 09:20:44.910: INFO: pod-secrets-0da0406d-ca0f-4f4d-84a5-33a16c483cff started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container secret-volume-test ready: false, restart count 0 Dec 2 09:20:44.910: INFO: pod-terminate-status-0-14 started at 2022-12-02 09:20:41 +0000 UTC (1+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Init container fail ready: false, restart count 0 Dec 2 09:20:44.910: INFO: Container blocked ready: false, restart count 0 Dec 2 09:20:44.910: INFO: execpodws7zw started at 2022-12-02 09:20:42 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container agnhost-container ready: false, restart count 0 Dec 2 09:20:44.910: INFO: simpletest.rc-zj2ft started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.910: INFO: test-webserver-98190dda-eab4-4a0b-a4ec-afbb6264f9c0 started at 2022-12-02 09:18:17 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container test-webserver ready: true, restart count 0 Dec 2 09:20:44.910: INFO: coredns-autoscaler-85fcbbb64-kb6k7 started at 2022-12-02 09:02:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container autoscaler ready: true, restart count 0 Dec 2 09:20:44.910: INFO: simpletest.rc-njxsz started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.910: INFO: kube-proxy-ip-172-20-37-90.ap-southeast-1.compute.internal started at 2022-12-02 09:01:54 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:44.910: INFO: calico-node-cqg7n started at 2022-12-02 09:02:04 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:44.910: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:20:44.910: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:44.910: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:44.910: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:44.910: INFO: httpd started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container httpd ready: false, restart count 0 Dec 2 09:20:44.910: 
INFO: bin-falseb55ef21a-ef68-4260-9830-7a34a8977c97 started at 2022-12-02 09:20:42 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container bin-falseb55ef21a-ef68-4260-9830-7a34a8977c97 ready: false, restart count 0 Dec 2 09:20:44.910: INFO: simpletest.rc-r9d9b started at 2022-12-02 09:18:34 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.910: INFO: simpletest.rc-t5ztv started at 2022-12-02 09:18:31 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.910: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:45.932: INFO: Latency metrics for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:45.932: INFO: Logging node info for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:46.142: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-49-67.ap-southeast-1.compute.internal 81600d2c-3d2a-4421-913e-e1c53c1ad1df 41217 0 2022-12-02 09:02:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a io.kubernetes.storage.mock/node:some-mock-node kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-49-67.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-49-67.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1102":"ip-172-20-49-67.ap-southeast-1.compute.internal","ebs.csi.aws.com":"i-056f60b74d454bea7"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.49.67/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.24.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:03:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:18:47 +0000 UTC 
FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:18:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-056f60b74d454bea7,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:19 +0000 UTC,LastTransitionTime:2022-12-02 09:03:19 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:03:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.49.67,},NodeAddress{Type:ExternalIP,Address:13.228.79.89,},NodeAddress{Type:Hostname,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-228-79-89.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2bd833fc2a274ccf3bf225f245ddce,SystemUUID:ec2bd833-fc2a-274c-cf3b-f225f245ddce,BootID:1ab59414-4d0c-4bc8-bb64-5f41a1b02c74,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 
(Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:dd6d57960dc104a4ee0fa7c58c6faa3e38725561af374c17f8cb905f7f73ba66 k8s.gcr.io/build-image/debian-iptables:bullseye-v1.1.0],SizeBytes:27059231,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce 
k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b,DevicePath:,},},Config:nil,},} Dec 2 09:20:46.142: INFO: Logging kubelet events for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:46.358: INFO: Logging pods the kubelet thinks is on node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:46.577: INFO: simpletest.rc-q75ts started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.577: INFO: kube-proxy-ip-172-20-49-67.ap-southeast-1.compute.internal started at 2022-12-02 09:01:59 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:46.577: INFO: simpletest.rc-qfccr started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.577: INFO: ss2-0 started at 2022-12-02 09:19:03 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container webserver 
ready: true, restart count 0 Dec 2 09:20:46.577: INFO: oidc-discovery-validator started at 2022-12-02 09:19:03 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container oidc-discovery-validator ready: false, restart count 0 Dec 2 09:20:46.577: INFO: simpletest.rc-sdlx6 started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.577: INFO: simpletest.rc-vjkr4 started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.577: INFO: ebs-csi-node-w9kzj started at 2022-12-02 09:02:20 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:46.577: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:46.577: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:46.577: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:46.577: INFO: default started at 2022-12-02 09:20:44 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:46.577: INFO: csi-hostpathplugin-0 started at 2022-12-02 09:18:29 +0000 UTC (0+7 container statuses recorded) Dec 2 09:20:46.577: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:20:46.577: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:46.577: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:20:46.577: INFO: Container csi-snapshotter ready: true, restart count 0 Dec 2 09:20:46.577: INFO: Container hostpath ready: true, restart count 0 Dec 2 09:20:46.577: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:46.577: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:46.577: INFO: simpletest.rc-nxlcw started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.577: INFO: simpletest.rc-s8s8z started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.577: INFO: private started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:46.577: INFO: externalsvc-gfw8b started at 2022-12-02 09:19:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container externalsvc ready: true, restart count 0 Dec 2 09:20:46.577: INFO: downwardapi-volume-e3f86704-2ad4-4471-80f7-f49d1890acfa started at 2022-12-02 09:20:44 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container client-container ready: false, restart count 0 Dec 2 09:20:46.577: INFO: simpletest.rc-xt5qf started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.577: INFO: slave started at 2022-12-02 09:19:22 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.577: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:46.578: INFO: ss-0 started at 2022-12-02 09:19:03 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.578: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:46.578: INFO: svc-latency-rc-n6rnr started at 2022-12-02 09:19:15 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.578: INFO: 
Container svc-latency-rc ready: true, restart count 0 Dec 2 09:20:46.578: INFO: calico-node-n6lj9 started at 2022-12-02 09:02:20 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:46.578: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:46.578: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:20:46.578: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:46.578: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:46.578: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:46.578: INFO: master started at 2022-12-02 09:19:13 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.578: INFO: Container cntr ready: true, restart count 0 Dec 2 09:20:46.578: INFO: simpletest.rc-s98w8 started at 2022-12-02 09:18:31 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:46.578: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:47.856: INFO: Latency metrics for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:47.857: INFO: Logging node info for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:20:48.073: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-55-194.ap-southeast-1.compute.internal 890854e9-f510-402d-9886-49c1d41318f4 34763 0 2022-12-02 09:00:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:master-ap-southeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-55-194.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-00b46fae03d775a19"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.55.194/19 projectcalico.org/IPv4IPIPTunnelAddr:100.104.201.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-02 09:00:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2022-12-02 09:01:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2022-12-02 09:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:01:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {Go-http-client Update v1 2022-12-02 09:02:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:02:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-00b46fae03d775a19,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3894931456 0} {<nil>} 3803644Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790073856 0} {<nil>} 3701244Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:02:00 +0000 UTC,LastTransitionTime:2022-12-02 09:02:00 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:01:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.194,},NodeAddress{Type:ExternalIP,Address:54.169.84.77,},NodeAddress{Type:Hostname,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-84-77.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2521391aeba8d2805b54ac578aa7d0,SystemUUID:ec252139-1aeb-a8d2-805b-54ac578aa7d0,BootID:4e785fe8-5068-4fd6-b8b0-5a4aae03c815,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:256a64fb44876d270f04ada1afd3ca431341f249aa52cbe2b3780f8f23961142 registry.k8s.io/etcdadm/etcd-manager:v3.0.20220727],SizeBytes:216364516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.14],SizeBytes:136567243,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.14],SizeBytes:126380852,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.14],SizeBytes:54860595,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:58cc91c551e9e941a752e205eefed1c8da56f97a51e054b3d341b67bb7bf27eb docker.io/calico/kube-controllers:v3.23.5],SizeBytes:53774679,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.24.5],SizeBytes:41269276,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.24.5],SizeBytes:40816784,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 
registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.24.5],SizeBytes:5130223,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:48.096: INFO: Logging kubelet events for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:20:48.338: INFO: Logging pods the kubelet thinks is on node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:20:48.636: INFO: kube-scheduler-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:48.638: INFO: Container kube-scheduler ready: true, restart count 0 Dec 2 09:20:48.639: INFO: calico-node-xfrb9 started at 2022-12-02 09:01:32 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:48.639: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:48.639: INFO: Init container install-cni ready: true, restart count 0 Dec 2 09:20:48.639: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:48.639: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:48.639: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:48.639: INFO: kops-controller-7l85j started at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:48.639: INFO: Container kops-controller ready: true, restart count 0 Dec 2 09:20:48.639: INFO: etcd-manager-events-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:48.639: INFO: Container etcd-manager ready: true, restart count 0 Dec 2 09:20:48.639: INFO: etcd-manager-main-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:48.639: INFO: Container etcd-manager ready: true, restart count 0 Dec 2 09:20:48.639: INFO: kube-apiserver-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+2 container statuses recorded) Dec 2 09:20:48.639: INFO: Container healthcheck ready: true, restart count 0 Dec 2 09:20:48.639: INFO: Container kube-apiserver ready: true, restart count 1 Dec 2 09:20:48.639: INFO: kube-controller-manager-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:48.639: INFO: Container kube-controller-manager ready: true, restart count 2 Dec 2 09:20:48.639: INFO: kube-proxy-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:48.639: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:48.639: INFO: ebs-csi-controller-55c8659c7c-sqq7m started at 2022-12-02 09:01:32 +0000 UTC (0+5 container statuses recorded) Dec 2 09:20:48.639: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:20:48.639: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:48.639: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:20:48.639: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 
09:20:48.639: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:48.639: INFO: ebs-csi-node-rfwfq started at 2022-12-02 09:01:32 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:48.639: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:48.639: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:48.639: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:48.639: INFO: dns-controller-847484c97f-z8rs4 started at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:48.639: INFO: Container dns-controller ready: true, restart count 0 Dec 2 09:20:48.639: INFO: calico-kube-controllers-795c657547-9mz5t started at 2022-12-02 09:01:48 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:48.639: INFO: Container calico-kube-controllers ready: true, restart count 0 Dec 2 09:20:50.190: INFO: Latency metrics for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:20:50.215: INFO: Logging node info for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:50.751: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-60-164.ap-southeast-1.compute.internal 4d06e01c-27c4-4c2f-b118-647413c7ddf6 40537 0 2022-12-02 09:02:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-60-164.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-60-164.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9857":"ip-172-20-60-164.ap-southeast-1.compute.internal","ebs.csi.aws.com":"i-0a7cd257efff997b0"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.60.164/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.61.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:17:54 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:17:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-0a7cd257efff997b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:11 +0000 UTC,LastTransitionTime:2022-12-02 09:03:11 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:02:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.60.164,},NodeAddress{Type:ExternalIP,Address:13.212.105.239,},NodeAddress{Type:Hostname,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-105-239.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ab9d0d1126900acfd3b82032bd9b,SystemUUID:ec28ab9d-0d11-2690-0acf-d3b82032bd9b,BootID:925eb9d6-3c66-49ad-be43-0411968ca10c,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 
k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0413aef78391fa3e6],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0413aef78391fa3e6,DevicePath:,},},Config:nil,},} Dec 2 09:20:51.002: INFO: Logging kubelet events for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:51.407: INFO: Logging pods the kubelet thinks is on node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:55.076: INFO: kube-proxy-ip-172-20-60-164.ap-southeast-1.compute.internal started at 2022-12-02 09:01:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:55.103: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:55.107: INFO: calico-node-gv4lf started at 2022-12-02 09:02:06 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:55.107: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:55.110: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:20:55.110: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:55.110: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:55.110: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:55.110: INFO: ss2-1 started at 2022-12-02 09:19:19 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:55.110: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:55.110: INFO: pod-terminate-status-2-14 started at 2022-12-02 09:19:29 +0000 UTC (1+1 container statuses recorded) Dec 2 09:20:55.110: INFO: Init container fail ready: false, restart count 0 Dec 2 09:20:55.110: INFO: Container blocked ready: false, restart count 0 Dec 2 09:20:55.110: INFO: ebs-csi-node-lrwc5 started at 2022-12-02 09:02:06 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:55.110: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:55.110: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:55.110: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:55.110: INFO: external-client started at 2022-12-02 09:19:27 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:55.110: INFO: Container external-client ready: true, restart count 0 Dec 2 09:20:55.110: INFO: externalsvc-kc489 started at 2022-12-02 09:19:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:55.110: INFO: Container externalsvc ready: true, restart count 0 Dec 2 09:20:55.110: INFO: csi-hostpathplugin-0 started at 2022-12-02 09:17:33 +0000 UTC (0+7 container statuses recorded) Dec 2 09:20:55.110: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:20:55.110: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:55.110: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:20:55.110: INFO: Container csi-snapshotter ready: true, restart count 0 Dec 2 09:20:55.110: INFO: Container hostpath ready: true, restart count 0 Dec 2 09:20:55.110: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:55.110: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:55.110: INFO: hostexec-ip-172-20-60-164.ap-southeast-1.compute.internal-qrptd started at 2022-12-02 09:20:43 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:55.110: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:21:02.391: INFO: Latency metrics for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:21:02.434: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready �[1mSTEP�[0m: Destroying namespace "node-lease-test-2317" for this suite.
Filter through log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sPods\sExtended\sPod\sContainer\sStatus\sshould\snever\sreport\scontainer\sstart\swhen\san\sinit\scontainer\sfails$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:214 Dec 2 09:20:40.889: failed to take action Unexpected error: <*url.Error | 0xc004702000>: { Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io/api/v1/namespaces/pods-5116/pods/pod-terminate-status-2-14", Err: { s: "http2: client connection lost", }, } Delete "https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io/api/v1/namespaces/pods-5116/pods/pod-terminate-status-2-14": http2: client connection lost occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:414from junit_01.xml
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":30,"skipped":271,"failed":0} [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Dec 2 09:17:38.137: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Pod Container Status /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:202 [It] should never report container start when an init container fails /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:214 �[1mSTEP�[0m: creating pods with an init container that always exit 1 and terminating the pod after a random delay Dec 2 09:17:42.677: INFO: watch last event seen for pod-terminate-status-0-0 Dec 2 09:17:42.677: INFO: Pod pod-terminate-status-0-0 on node ip-172-20-34-182.ap-southeast-1.compute.internal t=294ms total=3.066804878s run=0s execute=0s Dec 2 09:17:43.658: INFO: watch last event seen for pod-terminate-status-2-0 Dec 2 09:17:43.658: INFO: Pod pod-terminate-status-2-0 on node ip-172-20-34-182.ap-southeast-1.compute.internal t=200ms total=4.047570267s run=0s execute=0s Dec 2 09:17:46.416: INFO: watch last event seen for pod-terminate-status-0-1 Dec 2 09:17:46.416: INFO: Pod pod-terminate-status-0-1 on node ip-172-20-37-90.ap-southeast-1.compute.internal t=787ms total=3.738342993s run=1s execute=0s Dec 2 09:17:47.843: INFO: watch last event seen for pod-terminate-status-2-1 Dec 2 09:17:47.843: INFO: Pod pod-terminate-status-2-1 on node ip-172-20-34-182.ap-southeast-1.compute.internal t=105ms total=4.184424174s run=0s execute=0s Dec 2 09:17:48.441: INFO: watch last event seen for pod-terminate-status-1-0 Dec 2 09:17:48.441: INFO: Pod pod-terminate-status-1-0 on node ip-172-20-34-182.ap-southeast-1.compute.internal t=934ms total=8.83075791s run=2s execute=0s Dec 2 09:17:52.652: INFO: watch last event seen for pod-terminate-status-0-2 Dec 2 09:17:52.653: INFO: Pod pod-terminate-status-0-2 on node ip-172-20-34-182.ap-southeast-1.compute.internal t=287ms total=6.23650802s run=0s execute=0s Dec 2 09:17:53.644: INFO: watch last event seen for pod-terminate-status-2-2 Dec 2 09:17:53.644: INFO: Pod pod-terminate-status-2-2 on node ip-172-20-34-182.ap-southeast-1.compute.internal t=1.737s total=5.801409956s run=1s execute=0s Dec 2 09:17:54.646: INFO: watch last event seen for pod-terminate-status-1-1 Dec 2 09:17:54.646: INFO: Pod pod-terminate-status-1-1 on node ip-172-20-34-182.ap-southeast-1.compute.internal t=1.96s total=6.20465648s run=1s execute=0s Dec 2 09:17:59.168: INFO: watch last event seen for pod-terminate-status-2-3 Dec 2 09:17:59.168: INFO: Pod pod-terminate-status-2-3 on node ip-172-20-37-90.ap-southeast-1.compute.internal t=836ms total=5.524060382s run=3s execute=0s Dec 2 09:17:59.245: INFO: watch last event seen for pod-terminate-status-1-2 Dec 2 09:17:59.245: INFO: Pod pod-terminate-status-1-2 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=1.472s total=4.598867965s run=3s execute=0s Dec 2 09:18:02.329: INFO: watch last event seen for pod-terminate-status-0-3 Dec 2 09:18:02.329: INFO: Pod pod-terminate-status-0-3 on node 
ip-172-20-37-90.ap-southeast-1.compute.internal t=840ms total=9.676693013s run=2s execute=0s Dec 2 09:18:04.920: INFO: watch last event seen for pod-terminate-status-0-4 Dec 2 09:18:04.920: INFO: Pod pod-terminate-status-0-4 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=76ms total=2.590869688s run=1s execute=0s Dec 2 09:18:06.331: INFO: watch last event seen for pod-terminate-status-1-3 Dec 2 09:18:06.332: INFO: Pod pod-terminate-status-1-3 on node ip-172-20-37-90.ap-southeast-1.compute.internal t=949ms total=7.086312886s run=1s execute=0s Dec 2 09:18:08.931: INFO: watch last event seen for pod-terminate-status-2-4 Dec 2 09:18:08.931: INFO: Pod pod-terminate-status-2-4 on node ip-172-20-37-90.ap-southeast-1.compute.internal t=152ms total=9.76297314s run=1s execute=0s Dec 2 09:18:11.335: INFO: watch last event seen for pod-terminate-status-0-5 Dec 2 09:18:11.335: INFO: Pod pod-terminate-status-0-5 on node ip-172-20-37-90.ap-southeast-1.compute.internal t=1.309s total=6.414238229s run=0s execute=0s Dec 2 09:18:12.269: INFO: watch last event seen for pod-terminate-status-2-5 Dec 2 09:18:12.269: INFO: Pod pod-terminate-status-2-5 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=734ms total=3.338058101s run=1s execute=0s Dec 2 09:18:14.131: INFO: watch last event seen for pod-terminate-status-1-4 Dec 2 09:18:14.131: INFO: Pod pod-terminate-status-1-4 on node ip-172-20-37-90.ap-southeast-1.compute.internal t=399ms total=7.799599039s run=0s execute=0s Dec 2 09:18:16.817: INFO: watch last event seen for pod-terminate-status-0-6 Dec 2 09:18:16.817: INFO: Pod pod-terminate-status-0-6 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=1.043s total=5.481920207s run=1s execute=0s Dec 2 09:18:20.417: INFO: watch last event seen for pod-terminate-status-2-6 Dec 2 09:18:20.417: INFO: Pod pod-terminate-status-2-6 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=1.344s total=8.147280781s run=1s execute=0s Dec 2 09:18:20.731: INFO: watch last event seen for pod-terminate-status-1-5 Dec 2 09:18:20.731: INFO: Pod pod-terminate-status-1-5 on node ip-172-20-37-90.ap-southeast-1.compute.internal t=1.192s total=6.599472281s run=0s execute=0s Dec 2 09:18:22.926: INFO: watch last event seen for pod-terminate-status-0-7 Dec 2 09:18:22.926: INFO: Pod pod-terminate-status-0-7 on node ip-172-20-37-90.ap-southeast-1.compute.internal t=449ms total=6.109440523s run=0s execute=0s Dec 2 09:18:26.815: INFO: watch last event seen for pod-terminate-status-1-6 Dec 2 09:18:26.816: INFO: Pod pod-terminate-status-1-6 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=33ms total=6.084544293s run=0s execute=0s Dec 2 09:18:29.218: INFO: watch last event seen for pod-terminate-status-0-8 Dec 2 09:18:29.218: INFO: Pod pod-terminate-status-0-8 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=864ms total=6.291740001s run=0s execute=0s Dec 2 09:18:30.084: INFO: watch last event seen for pod-terminate-status-1-7 Dec 2 09:18:30.084: INFO: Pod pod-terminate-status-1-7 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=1.985s total=3.268498661s run=1s execute=0s Dec 2 09:18:31.016: INFO: watch last event seen for pod-terminate-status-2-7 Dec 2 09:18:31.016: INFO: Pod pod-terminate-status-2-7 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=1.786s total=10.599278887s run=2s execute=0s Dec 2 09:18:44.615: INFO: watch last event seen for pod-terminate-status-0-9 Dec 2 09:18:44.615: INFO: Pod pod-terminate-status-0-9 on node ip-172-20-49-67.ap-southeast-1.compute.internal 
t=990ms total=15.397127634s run=0s execute=0s Dec 2 09:18:45.845: INFO: watch last event seen for pod-terminate-status-2-8 Dec 2 09:18:45.845: INFO: Pod pod-terminate-status-2-8 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=1.943s total=14.829097629s run=3s execute=0s Dec 2 09:18:46.447: INFO: watch last event seen for pod-terminate-status-1-8 Dec 2 09:18:46.447: INFO: Pod pod-terminate-status-1-8 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=1.958s total=16.363230242s run=1s execute=0s Dec 2 09:18:52.047: INFO: watch last event seen for pod-terminate-status-1-9 Dec 2 09:18:52.047: INFO: Pod pod-terminate-status-1-9 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=245ms total=5.599724233s run=1s execute=0s Dec 2 09:18:57.449: INFO: watch last event seen for pod-terminate-status-0-10 Dec 2 09:18:57.449: INFO: Pod pod-terminate-status-0-10 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=230ms total=12.833616811s run=1s execute=0s Dec 2 09:19:01.456: INFO: watch last event seen for pod-terminate-status-1-10 Dec 2 09:19:01.456: INFO: Pod pod-terminate-status-1-10 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=1.437s total=9.40913643s run=1s execute=0s Dec 2 09:19:06.017: INFO: watch last event seen for pod-terminate-status-2-9 Dec 2 09:19:06.018: INFO: Pod pod-terminate-status-2-9 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=1.182s total=20.172145958s run=0s execute=0s Dec 2 09:19:07.537: INFO: watch last event seen for pod-terminate-status-1-11 Dec 2 09:19:07.537: INFO: Pod pod-terminate-status-1-11 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=1.462s total=6.080540078s run=2s execute=0s Dec 2 09:19:12.645: INFO: watch last event seen for pod-terminate-status-2-10 Dec 2 09:19:12.645: INFO: Pod pod-terminate-status-2-10 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=1.689s total=6.627104373s run=1s execute=0s Dec 2 09:19:15.217: INFO: watch last event seen for pod-terminate-status-0-11 Dec 2 09:19:15.217: INFO: Pod pod-terminate-status-0-11 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=1.58s total=17.768350358s run=2s execute=0s Dec 2 09:19:15.446: INFO: watch last event seen for pod-terminate-status-2-11 Dec 2 09:19:15.446: INFO: Pod pod-terminate-status-2-11 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=303ms total=2.801376472s run=2s execute=0s Dec 2 09:19:17.420: INFO: watch last event seen for pod-terminate-status-1-12 Dec 2 09:19:17.420: INFO: Pod pod-terminate-status-1-12 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=1.833s total=9.883092144s run=2s execute=0s Dec 2 09:19:21.414: INFO: watch last event seen for pod-terminate-status-2-12 Dec 2 09:19:21.414: INFO: Pod pod-terminate-status-2-12 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=223ms total=5.967512345s run=3s execute=0s Dec 2 09:19:22.434: INFO: watch last event seen for pod-terminate-status-0-12 Dec 2 09:19:22.434: INFO: Pod pod-terminate-status-0-12 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=1.309s total=7.21708066s run=1s execute=0s Dec 2 09:19:24.045: INFO: watch last event seen for pod-terminate-status-1-13 Dec 2 09:19:24.046: INFO: Pod pod-terminate-status-1-13 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=1.559s total=6.625783128s run=1s execute=0s Dec 2 09:19:29.048: INFO: watch last event seen for pod-terminate-status-1-14 Dec 2 09:19:29.049: INFO: Pod pod-terminate-status-1-14 on node ip-172-20-60-164.ap-southeast-1.compute.internal t=318ms 
total=5.002867274s run=1s execute=0s Dec 2 09:19:29.461: INFO: watch last event seen for pod-terminate-status-2-13 Dec 2 09:19:29.462: INFO: Pod pod-terminate-status-2-13 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=1.712s total=8.047682661s run=0s execute=0s Dec 2 09:20:40.883: FAIL: failed to take action Unexpected error: <*url.Error | 0xc004702000>: { Op: "Delete", URL: "https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io/api/v1/namespaces/pods-5116/pods/pod-terminate-status-2-14", Err: { s: "http2: client connection lost", }, } Delete "https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io/api/v1/namespaces/pods-5116/pods/pod-terminate-status-2-14": http2: client connection lost occurred Full Stack Trace k8s.io/kubernetes/test/e2e/node.createAndTestPodRepeatedly.func1(0xc003c98fb8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:414 +0x46d created by k8s.io/kubernetes/test/e2e/node.createAndTestPodRepeatedly /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:366 +0x338 Dec 2 09:20:40.897: INFO: watch last event seen for pod-terminate-status-0-13 Dec 2 09:20:40.902: INFO: Pod pod-terminate-status-0-13 on node ip-172-20-49-67.ap-southeast-1.compute.internal t=314ms total=1m18.466768875s run=3s execute=0s Dec 2 09:20:40.900: INFO: watch error seen for pod-terminate-status-2-14: &v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"unable to decode an event from the watch stream: http2: client connection lost\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00110a180), Code:500} Dec 2 09:20:49.053: INFO: watch last event seen for pod-terminate-status-0-14 Dec 2 09:20:49.057: INFO: Pod pod-terminate-status-0-14 on node ip-172-20-37-90.ap-southeast-1.compute.internal t=1.325s total=8.1223694s run=4s execute=0s Dec 2 09:20:49.089: INFO: Summary of latencies: # HELP latency # TYPE latency summary latency{node="ip-172-20-34-182.ap-southeast-1.compute.internal",quantile="0.5"} 5.801409956 latency{node="ip-172-20-34-182.ap-southeast-1.compute.internal",quantile="0.75"} 6.23650802 latency{node="ip-172-20-34-182.ap-southeast-1.compute.internal",quantile="0.9"} 8.830757909999999 latency{node="ip-172-20-34-182.ap-southeast-1.compute.internal",quantile="0.99"} 8.830757909999999 latency_sum{node="ip-172-20-34-182.ap-southeast-1.compute.internal"} 38.37213168499999 latency_count{node="ip-172-20-34-182.ap-southeast-1.compute.internal"} 7 latency{node="ip-172-20-37-90.ap-southeast-1.compute.internal",quantile="0.5"} 6.599472281 latency{node="ip-172-20-37-90.ap-southeast-1.compute.internal",quantile="0.75"} 8.1223694 latency{node="ip-172-20-37-90.ap-southeast-1.compute.internal",quantile="0.9"} 9.676693013 latency{node="ip-172-20-37-90.ap-southeast-1.compute.internal",quantile="0.99"} 9.76297314 latency_sum{node="ip-172-20-37-90.ap-southeast-1.compute.internal"} 70.833501886 latency_count{node="ip-172-20-37-90.ap-southeast-1.compute.internal"} 10 latency{node="ip-172-20-49-67.ap-southeast-1.compute.internal",quantile="0.5"} 8.047682661 latency{node="ip-172-20-49-67.ap-southeast-1.compute.internal",quantile="0.75"} 15.397127634 latency{node="ip-172-20-49-67.ap-southeast-1.compute.internal",quantile="0.9"} 20.172145958 
latency{node="ip-172-20-49-67.ap-southeast-1.compute.internal",quantile="0.99"} 78.466768875 latency_sum{node="ip-172-20-49-67.ap-southeast-1.compute.internal"} 207.46145087000002 latency_count{node="ip-172-20-49-67.ap-southeast-1.compute.internal"} 15 latency{node="ip-172-20-60-164.ap-southeast-1.compute.internal",quantile="0.5"} 6.080540078 latency{node="ip-172-20-60-164.ap-southeast-1.compute.internal",quantile="0.75"} 9.40913643 latency{node="ip-172-20-60-164.ap-southeast-1.compute.internal",quantile="0.9"} 14.829097629 latency{node="ip-172-20-60-164.ap-southeast-1.compute.internal",quantile="0.99"} 16.363230242 latency_sum{node="ip-172-20-60-164.ap-southeast-1.compute.internal"} 92.03184501899999 latency_count{node="ip-172-20-60-164.ap-southeast-1.compute.internal"} 12 [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "pods-5116". �[1mSTEP�[0m: Found 183 events. Dec 2 09:20:49.935: INFO: At 2022-12-02 09:17:39 +0000 UTC - event for pod-terminate-status-0-0: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-0 to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:39 +0000 UTC - event for pod-terminate-status-1-0: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-0 to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:39 +0000 UTC - event for pod-terminate-status-2-0: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-0 to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:40 +0000 UTC - event for pod-terminate-status-0-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:40 +0000 UTC - event for pod-terminate-status-0-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:40 +0000 UTC - event for pod-terminate-status-0-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:41 +0000 UTC - event for pod-terminate-status-1-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:41 +0000 UTC - event for pod-terminate-status-1-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:41 +0000 UTC - event for pod-terminate-status-1-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:41 +0000 UTC - event for pod-terminate-status-2-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:41 +0000 UTC - event for pod-terminate-status-2-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:41 +0000 UTC - event for pod-terminate-status-2-0: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image 
"k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:42 +0000 UTC - event for pod-terminate-status-0-1: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-1 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:43 +0000 UTC - event for pod-terminate-status-0-1: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:43 +0000 UTC - event for pod-terminate-status-0-1: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:43 +0000 UTC - event for pod-terminate-status-0-1: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:43 +0000 UTC - event for pod-terminate-status-2-1: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-1 to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:45 +0000 UTC - event for pod-terminate-status-2-1: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:45 +0000 UTC - event for pod-terminate-status-2-1: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:45 +0000 UTC - event for pod-terminate-status-2-1: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:46 +0000 UTC - event for pod-terminate-status-0-2: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-2 to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:47 +0000 UTC - event for pod-terminate-status-0-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.957: INFO: At 2022-12-02 09:17:47 +0000 UTC - event for pod-terminate-status-0-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:47 +0000 UTC - event for pod-terminate-status-0-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:47 +0000 UTC - event for pod-terminate-status-2-2: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-2 to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:48 +0000 UTC - event for pod-terminate-status-1-1: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-1 to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:48 +0000 UTC - event for pod-terminate-status-2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:48 +0000 UTC - event for pod-terminate-status-2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: 
Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:48 +0000 UTC - event for pod-terminate-status-2-2: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:49 +0000 UTC - event for pod-terminate-status-1-1: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:49 +0000 UTC - event for pod-terminate-status-1-1: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:49 +0000 UTC - event for pod-terminate-status-1-1: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:52 +0000 UTC - event for pod-terminate-status-0-3: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-3 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:53 +0000 UTC - event for pod-terminate-status-2-3: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-3 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:54 +0000 UTC - event for pod-terminate-status-0-3: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:54 +0000 UTC - event for pod-terminate-status-0-3: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:54 +0000 UTC - event for pod-terminate-status-0-3: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:54 +0000 UTC - event for pod-terminate-status-0-3: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Killing: Stopping container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:54 +0000 UTC - event for pod-terminate-status-1-2: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-2 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:55 +0000 UTC - event for pod-terminate-status-2-3: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:55 +0000 UTC - event for pod-terminate-status-2-3: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:56 +0000 UTC - event for pod-terminate-status-1-2: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-2c9hl" : failed to sync configmap cache: timed out waiting for the condition Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:56 +0000 UTC - event for pod-terminate-status-2-3: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:57 +0000 UTC - event for pod-terminate-status-1-2: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: 
INFO: At 2022-12-02 09:17:57 +0000 UTC - event for pod-terminate-status-1-2: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:57 +0000 UTC - event for pod-terminate-status-1-2: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:59 +0000 UTC - event for pod-terminate-status-1-3: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-3 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:17:59 +0000 UTC - event for pod-terminate-status-2-4: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-4 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:00 +0000 UTC - event for pod-terminate-status-1-3: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:00 +0000 UTC - event for pod-terminate-status-1-3: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:00 +0000 UTC - event for pod-terminate-status-1-3: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:00 +0000 UTC - event for pod-terminate-status-2-4: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:00 +0000 UTC - event for pod-terminate-status-2-4: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:00 +0000 UTC - event for pod-terminate-status-2-4: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:02 +0000 UTC - event for pod-terminate-status-0-4: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-4 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:03 +0000 UTC - event for pod-terminate-status-0-4: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:03 +0000 UTC - event for pod-terminate-status-0-4: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:03 +0000 UTC - event for pod-terminate-status-0-4: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:05 +0000 UTC - event for pod-terminate-status-0-5: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-5 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:06 +0000 UTC - event for pod-terminate-status-0-5: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:06 +0000 UTC - event for pod-terminate-status-0-5: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image 
"k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:06 +0000 UTC - event for pod-terminate-status-0-5: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:06 +0000 UTC - event for pod-terminate-status-1-4: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-4 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:07 +0000 UTC - event for pod-terminate-status-1-4: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:07 +0000 UTC - event for pod-terminate-status-1-4: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:08 +0000 UTC - event for pod-terminate-status-1-4: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:09 +0000 UTC - event for pod-terminate-status-2-5: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:09 +0000 UTC - event for pod-terminate-status-2-5: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-5 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:09 +0000 UTC - event for pod-terminate-status-2-5: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:10 +0000 UTC - event for pod-terminate-status-2-5: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:11 +0000 UTC - event for pod-terminate-status-0-6: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-6 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:12 +0000 UTC - event for pod-terminate-status-0-6: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:12 +0000 UTC - event for pod-terminate-status-0-6: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:12 +0000 UTC - event for pod-terminate-status-0-6: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:12 +0000 UTC - event for pod-terminate-status-2-6: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-6 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:13 +0000 UTC - event for pod-terminate-status-2-6: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:13 +0000 UTC - event for pod-terminate-status-2-6: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:13 +0000 UTC - event for pod-terminate-status-2-6: 
{kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:14 +0000 UTC - event for pod-terminate-status-1-5: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-5 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:15 +0000 UTC - event for pod-terminate-status-1-5: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:15 +0000 UTC - event for pod-terminate-status-1-5: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:16 +0000 UTC - event for pod-terminate-status-0-7: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-7 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:16 +0000 UTC - event for pod-terminate-status-1-5: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:18 +0000 UTC - event for pod-terminate-status-0-7: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:18 +0000 UTC - event for pod-terminate-status-0-7: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:18 +0000 UTC - event for pod-terminate-status-0-7: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:20 +0000 UTC - event for pod-terminate-status-1-6: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-6 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:20 +0000 UTC - event for pod-terminate-status-2-7: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-7 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:22 +0000 UTC - event for pod-terminate-status-1-6: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:22 +0000 UTC - event for pod-terminate-status-1-6: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:22 +0000 UTC - event for pod-terminate-status-1-6: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:22 +0000 UTC - event for pod-terminate-status-2-7: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:22 +0000 UTC - event for pod-terminate-status-2-7: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:22 +0000 UTC - event for pod-terminate-status-2-7: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image 
"k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:23 +0000 UTC - event for pod-terminate-status-0-8: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-8 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:24 +0000 UTC - event for pod-terminate-status-0-8: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:24 +0000 UTC - event for pod-terminate-status-0-8: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:24 +0000 UTC - event for pod-terminate-status-0-8: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.961: INFO: At 2022-12-02 09:18:26 +0000 UTC - event for pod-terminate-status-1-7: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-7 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:27 +0000 UTC - event for pod-terminate-status-1-7: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:27 +0000 UTC - event for pod-terminate-status-1-7: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:27 +0000 UTC - event for pod-terminate-status-1-7: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:29 +0000 UTC - event for pod-terminate-status-0-9: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-9 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:30 +0000 UTC - event for pod-terminate-status-0-9: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:30 +0000 UTC - event for pod-terminate-status-0-9: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:30 +0000 UTC - event for pod-terminate-status-0-9: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:30 +0000 UTC - event for pod-terminate-status-1-8: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-8 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:31 +0000 UTC - event for pod-terminate-status-1-8: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:31 +0000 UTC - event for pod-terminate-status-1-8: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:31 +0000 UTC - event for pod-terminate-status-1-8: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.962: INFO: At 
2022-12-02 09:18:31 +0000 UTC - event for pod-terminate-status-2-8: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-8 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:33 +0000 UTC - event for pod-terminate-status-2-8: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:34 +0000 UTC - event for pod-terminate-status-2-8: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:34 +0000 UTC - event for pod-terminate-status-2-8: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:44 +0000 UTC - event for pod-terminate-status-0-10: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-10 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:45 +0000 UTC - event for pod-terminate-status-0-10: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:45 +0000 UTC - event for pod-terminate-status-0-10: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:45 +0000 UTC - event for pod-terminate-status-0-10: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:45 +0000 UTC - event for pod-terminate-status-2-9: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-9 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:46 +0000 UTC - event for pod-terminate-status-1-9: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-9 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:47 +0000 UTC - event for pod-terminate-status-1-9: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:47 +0000 UTC - event for pod-terminate-status-1-9: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:47 +0000 UTC - event for pod-terminate-status-1-9: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:47 +0000 UTC - event for pod-terminate-status-2-9: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:47 +0000 UTC - event for pod-terminate-status-2-9: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:47 +0000 UTC - event for pod-terminate-status-2-9: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.962: INFO: At 2022-12-02 09:18:52 +0000 UTC - event for pod-terminate-status-1-10: {default-scheduler } 
Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-10 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:18:53 +0000 UTC - event for pod-terminate-status-1-10: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:18:53 +0000 UTC - event for pod-terminate-status-1-10: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:18:53 +0000 UTC - event for pod-terminate-status-1-10: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:18:57 +0000 UTC - event for pod-terminate-status-0-11: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-11 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:18:59 +0000 UTC - event for pod-terminate-status-0-11: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:18:59 +0000 UTC - event for pod-terminate-status-0-11: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:18:59 +0000 UTC - event for pod-terminate-status-0-11: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:01 +0000 UTC - event for pod-terminate-status-1-11: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-11 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:03 +0000 UTC - event for pod-terminate-status-1-11: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:03 +0000 UTC - event for pod-terminate-status-1-11: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:03 +0000 UTC - event for pod-terminate-status-1-11: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:06 +0000 UTC - event for pod-terminate-status-2-10: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-10 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:07 +0000 UTC - event for pod-terminate-status-1-12: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-12 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:07 +0000 UTC - event for pod-terminate-status-2-10: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:07 +0000 UTC - event for pod-terminate-status-2-10: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:07 +0000 UTC - event for pod-terminate-status-2-10: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image 
"k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:08 +0000 UTC - event for pod-terminate-status-1-12: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:08 +0000 UTC - event for pod-terminate-status-1-12: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:09 +0000 UTC - event for pod-terminate-status-1-12: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:12 +0000 UTC - event for pod-terminate-status-2-11: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-11 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:14 +0000 UTC - event for pod-terminate-status-2-11: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:14 +0000 UTC - event for pod-terminate-status-2-11: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:14 +0000 UTC - event for pod-terminate-status-2-11: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:15 +0000 UTC - event for pod-terminate-status-0-12: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-12 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:15 +0000 UTC - event for pod-terminate-status-2-12: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-12 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:16 +0000 UTC - event for pod-terminate-status-0-12: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:16 +0000 UTC - event for pod-terminate-status-0-12: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:16 +0000 UTC - event for pod-terminate-status-0-12: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:17 +0000 UTC - event for pod-terminate-status-1-13: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-13 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:17 +0000 UTC - event for pod-terminate-status-2-12: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:17 +0000 UTC - event for pod-terminate-status-2-12: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:18 +0000 UTC - event for pod-terminate-status-1-13: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 
09:20:49.963: INFO: At 2022-12-02 09:19:18 +0000 UTC - event for pod-terminate-status-1-13: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:18 +0000 UTC - event for pod-terminate-status-1-13: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:18 +0000 UTC - event for pod-terminate-status-2-12: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:21 +0000 UTC - event for pod-terminate-status-2-13: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-13 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:22 +0000 UTC - event for pod-terminate-status-0-13: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-13 to ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:23 +0000 UTC - event for pod-terminate-status-2-13: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:23 +0000 UTC - event for pod-terminate-status-2-13: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:23 +0000 UTC - event for pod-terminate-status-2-13: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:24 +0000 UTC - event for pod-terminate-status-1-14: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-1-14 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:25 +0000 UTC - event for pod-terminate-status-0-13: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:25 +0000 UTC - event for pod-terminate-status-0-13: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:25 +0000 UTC - event for pod-terminate-status-0-13: {kubelet ip-172-20-49-67.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:25 +0000 UTC - event for pod-terminate-status-1-14: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:25 +0000 UTC - event for pod-terminate-status-1-14: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:25 +0000 UTC - event for pod-terminate-status-1-14: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:29 +0000 UTC - event for pod-terminate-status-2-14: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-2-14 to ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:30 +0000 UTC - event for 
pod-terminate-status-2-14: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:30 +0000 UTC - event for pod-terminate-status-2-14: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:19:30 +0000 UTC - event for pod-terminate-status-2-14: {kubelet ip-172-20-60-164.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:20:41 +0000 UTC - event for pod-terminate-status-0-14: {default-scheduler } Scheduled: Successfully assigned pods-5116/pod-terminate-status-0-14 to ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:49.963: INFO: At 2022-12-02 09:20:43 +0000 UTC - event for pod-terminate-status-0-14: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-kjgfn" : failed to sync configmap cache: timed out waiting for the condition Dec 2 09:20:49.963: INFO: At 2022-12-02 09:20:44 +0000 UTC - event for pod-terminate-status-0-14: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:49.963: INFO: At 2022-12-02 09:20:45 +0000 UTC - event for pod-terminate-status-0-14: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Created: Created container fail Dec 2 09:20:49.963: INFO: At 2022-12-02 09:20:45 +0000 UTC - event for pod-terminate-status-0-14: {kubelet ip-172-20-37-90.ap-southeast-1.compute.internal} Started: Started container fail Dec 2 09:20:50.568: INFO: POD NODE PHASE GRACE CONDITIONS Dec 2 09:20:50.576: INFO: pod-terminate-status-2-14 ip-172-20-60-164.ap-southeast-1.compute.internal Failed [{Initialized False 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:19:29 +0000 UTC ContainersNotInitialized containers with incomplete status: [fail]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:19:29 +0000 UTC PodFailed } {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:19:29 +0000 UTC PodFailed } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:19:29 +0000 UTC }] Dec 2 09:20:50.606: INFO: Dec 2 09:20:51.106: INFO: Unable to fetch pods-5116/pod-terminate-status-2-14/blocked logs: the server rejected our request for an unknown reason (get pods pod-terminate-status-2-14) Dec 2 09:20:51.478: INFO: Logging node info for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:52.098: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-34-182.ap-southeast-1.compute.internal fd7593c8-1a7c-4e6d-9018-4c36698568dc 38632 0 2022-12-02 09:02:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-34-182.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-34-182.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] 
map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7299":"csi-mock-csi-mock-volumes-7299","ebs.csi.aws.com":"i-070fdf3c5d5f93304"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.34.182/19 projectcalico.org/IPv4IPIPTunnelAddr:100.116.72.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {Go-http-client Update v1 2022-12-02 09:03:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-070fdf3c5d5f93304,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:22 +0000 UTC,LastTransitionTime:2022-12-02 09:03:22 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:03:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.34.182,},NodeAddress{Type:ExternalIP,Address:54.169.57.14,},NodeAddress{Type:Hostname,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-57-14.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec264a17458d690f294e12b6a6b2138c,SystemUUID:ec264a17-458d-690f-294e-12b6a6b2138c,BootID:37b6e011-229a-4491-b86f-f149d97d10c0,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:52.306: INFO: Logging kubelet events for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:52.926: INFO: Logging pods the kubelet thinks is on node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:58.425: INFO: ss2-2 started at 2022-12-02 09:20:43 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:58.463: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:58.473: INFO: hostexec-ip-172-20-34-182.ap-southeast-1.compute.internal-fkcf6 started at 2022-12-02 09:20:44 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:58.486: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:20:58.486: INFO: startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c started at 2022-12-02 09:16:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:58.486: INFO: Container busybox ready: false, restart count 0 Dec 2 09:20:58.486: INFO: calico-node-xhqfx started at 2022-12-02 09:02:23 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:58.486: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:58.491: INFO: Init container install-cni ready: true, restart count 0 Dec 2 09:20:58.491: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:58.491: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:58.491: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:58.491: INFO: test-ss-0 started at 2022-12-02 09:17:26 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:58.491: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:58.491: INFO: ebs-csi-node-4b4zl started at 2022-12-02 09:02:23 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:58.491: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:58.491: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:58.491: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:58.491: INFO: pod-service-account-nomountsa started at 2022-12-02 09:20:53 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:58.491: INFO: Container token-test ready: false, restart count 0 Dec 2 09:20:58.491: INFO: kube-proxy-ip-172-20-34-182.ap-southeast-1.compute.internal started at 2022-12-02 09:02:02 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:58.491: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:58.491: INFO: csi-mockplugin-0 started at 2022-12-02 09:18:23 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:58.491: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:58.491: INFO: Container driver-registrar ready: true, restart count 0 Dec 2 09:20:58.491: INFO: Container mock ready: true, restart count 0 Dec 2 09:20:58.491: INFO: coredns-5556cb978d-bx2m5 started at 2022-12-02 09:03:10 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:58.491: INFO: Container coredns ready: true, restart count 0 Dec 2 09:20:58.491: INFO: pod-service-account-defaultsa started at 2022-12-02 09:20:51 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:58.491: INFO: Container token-test ready: true, restart count 0 Dec 2 09:25:47.240: INFO: Latency metrics for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:25:47.243: INFO: Logging node info for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:25:48.102: INFO: Node Info: 
&Node{ObjectMeta:{ip-172-20-37-90.ap-southeast-1.compute.internal f779b12d-0e95-4e7f-929e-368941a29b99 41784 0 2022-12-02 09:02:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-90.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-37-90.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-001dd83f455b4a895"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.37.90/19 projectcalico.org/IPv4IPIPTunnelAddr:100.114.18.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:20:50 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:20:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-001dd83f455b4a895,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: 
{{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:05 +0000 UTC,LastTransitionTime:2022-12-02 09:03:05 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:58 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:58 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:58 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:20:58 +0000 UTC,LastTransitionTime:2022-12-02 09:02:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.90,},NodeAddress{Type:ExternalIP,Address:13.212.195.103,},NodeAddress{Type:Hostname,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-195-103.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec216e9b184e3e44fb8ed6af9b651047,SystemUUID:ec216e9b-184e-3e44-fb8e-d6af9b651047,BootID:0bbb1eb8-60c7-4bb1-b8c7-bb110f238f78,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c 
k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 
registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0908a80c21068b13b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0908a80c21068b13b,DevicePath:,},},Config:nil,},} Dec 2 09:25:48.106: INFO: Logging kubelet events for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:25:48.322: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:25:48.968: INFO: coredns-autoscaler-85fcbbb64-kb6k7 started at 2022-12-02 09:02:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:48.968: INFO: Container autoscaler ready: true, restart count 0 Dec 2 09:25:48.968: INFO: kube-proxy-ip-172-20-37-90.ap-southeast-1.compute.internal started at 2022-12-02 09:01:54 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:48.968: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:25:48.968: INFO: calico-node-cqg7n started at 2022-12-02 09:02:04 +0000 UTC (4+1 container statuses recorded) Dec 2 09:25:48.968: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:25:48.968: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:25:48.968: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:25:48.968: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:25:48.968: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:25:48.968: INFO: httpd started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:48.968: INFO: Container httpd ready: true, restart count 0 Dec 2 09:25:48.968: INFO: ss2-0 started at 2022-12-02 09:20:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:48.968: INFO: Container webserver ready: true, restart count 0 Dec 2 09:25:48.968: INFO: coredns-5556cb978d-pztr5 started at 2022-12-02 09:02:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:48.968: INFO: Container coredns ready: true, restart count 0 Dec 2 09:25:48.968: INFO: ebs-csi-node-vswvn started at 2022-12-02 09:02:04 +0000 UTC (0+3 container statuses recorded) Dec 2 09:25:48.968: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:25:48.968: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 
09:25:48.968: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:25:48.968: INFO: test-ss-1 started at 2022-12-02 09:18:26 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:48.968: INFO: Container webserver ready: true, restart count 0 Dec 2 09:25:48.968: INFO: agnhost-primary-dgxqj started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:48.968: INFO: Container agnhost-primary ready: true, restart count 0 Dec 2 09:25:48.968: INFO: execpodws7zw started at 2022-12-02 09:20:42 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:48.968: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:25:48.968: INFO: ss-1 started at 2022-12-02 09:20:47 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:48.968: INFO: Container webserver ready: false, restart count 0 Dec 2 09:25:48.968: INFO: test-webserver-98190dda-eab4-4a0b-a4ec-afbb6264f9c0 started at 2022-12-02 09:18:17 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:48.968: INFO: Container test-webserver ready: true, restart count 0 Dec 2 09:25:49.711: INFO: Latency metrics for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:25:49.712: INFO: Logging node info for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:25:49.943: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-49-67.ap-southeast-1.compute.internal 81600d2c-3d2a-4421-913e-e1c53c1ad1df 41217 0 2022-12-02 09:02:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a io.kubernetes.storage.mock/node:some-mock-node kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-49-67.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-49-67.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1102":"ip-172-20-49-67.ap-southeast-1.compute.internal","ebs.csi.aws.com":"i-056f60b74d454bea7"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.49.67/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.24.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:03:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:18:47 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:18:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-056f60b74d454bea7,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:19 +0000 UTC,LastTransitionTime:2022-12-02 09:03:19 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:03:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.49.67,},NodeAddress{Type:ExternalIP,Address:13.228.79.89,},NodeAddress{Type:Hostname,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-228-79-89.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2bd833fc2a274ccf3bf225f245ddce,SystemUUID:ec2bd833-fc2a-274c-cf3b-f225f245ddce,BootID:1ab59414-4d0c-4bc8-bb64-5f41a1b02c74,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:dd6d57960dc104a4ee0fa7c58c6faa3e38725561af374c17f8cb905f7f73ba66 k8s.gcr.io/build-image/debian-iptables:bullseye-v1.1.0],SizeBytes:27059231,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b,DevicePath:,},},Config:nil,},} Dec 2 09:25:49.944: INFO: Logging kubelet events for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:25:50.203: INFO: Logging pods the kubelet thinks is on node 
ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:25:50.637: INFO: csi-hostpathplugin-0 started at 2022-12-02 09:18:29 +0000 UTC (0+7 container statuses recorded) Dec 2 09:25:50.637: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:25:50.637: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:25:50.637: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:25:50.637: INFO: Container csi-snapshotter ready: true, restart count 0 Dec 2 09:25:50.637: INFO: Container hostpath ready: true, restart count 0 Dec 2 09:25:50.637: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:25:50.637: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:25:50.638: INFO: private started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:50.638: INFO: Container cntr ready: true, restart count 0 Dec 2 09:25:50.638: INFO: externalsvc-gfw8b started at 2022-12-02 09:19:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:50.638: INFO: Container externalsvc ready: true, restart count 0 Dec 2 09:25:50.638: INFO: slave started at 2022-12-02 09:19:22 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:50.638: INFO: Container cntr ready: true, restart count 0 Dec 2 09:25:50.638: INFO: svc-latency-rc-n6rnr started at 2022-12-02 09:19:15 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:50.638: INFO: Container svc-latency-rc ready: true, restart count 0 Dec 2 09:25:50.638: INFO: calico-node-n6lj9 started at 2022-12-02 09:02:20 +0000 UTC (4+1 container statuses recorded) Dec 2 09:25:50.638: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:25:50.638: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:25:50.638: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:25:50.638: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:25:50.638: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:25:50.638: INFO: master started at 2022-12-02 09:19:13 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:50.638: INFO: Container cntr ready: true, restart count 0 Dec 2 09:25:50.638: INFO: kube-proxy-ip-172-20-49-67.ap-southeast-1.compute.internal started at 2022-12-02 09:01:59 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:50.638: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:25:50.638: INFO: pod-subpath-test-inlinevolume-h277 started at 2022-12-02 09:25:50 +0000 UTC (1+2 container statuses recorded) Dec 2 09:25:50.638: INFO: Init container test-init-subpath-inlinevolume-h277 ready: false, restart count 0 Dec 2 09:25:50.638: INFO: Container test-container-subpath-inlinevolume-h277 ready: false, restart count 0 Dec 2 09:25:50.638: INFO: Container test-container-volume-inlinevolume-h277 ready: false, restart count 0 Dec 2 09:25:50.638: INFO: hostexec-ip-172-20-49-67.ap-southeast-1.compute.internal-sfzxd started at 2022-12-02 09:25:50 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:50.638: INFO: Container agnhost-container ready: false, restart count 0 Dec 2 09:25:50.638: INFO: ebs-csi-node-w9kzj started at 2022-12-02 09:02:20 +0000 UTC (0+3 container statuses recorded) Dec 2 09:25:50.638: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:25:50.638: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:25:50.638: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:25:50.638: INFO: default started at 2022-12-02 
09:20:44 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:50.638: INFO: Container cntr ready: true, restart count 0 Dec 2 09:25:51.410: INFO: Latency metrics for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:25:51.410: INFO: Logging node info for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:25:51.674: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-55-194.ap-southeast-1.compute.internal 890854e9-f510-402d-9886-49c1d41318f4 42325 0 2022-12-02 09:00:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:master-ap-southeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-55-194.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-00b46fae03d775a19"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.55.194/19 projectcalico.org/IPv4IPIPTunnelAddr:100.104.201.0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-02 09:00:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2022-12-02 09:01:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2022-12-02 09:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:01:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {Go-http-client Update v1 2022-12-02 09:02:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:02:13 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-00b46fae03d775a19,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3894931456 0} {<nil>} 3803644Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790073856 0} {<nil>} 3701244Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:02:00 +0000 UTC,LastTransitionTime:2022-12-02 09:02:00 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:22:52 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:22:52 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:22:52 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:22:52 +0000 UTC,LastTransitionTime:2022-12-02 09:01:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.194,},NodeAddress{Type:ExternalIP,Address:54.169.84.77,},NodeAddress{Type:Hostname,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-84-77.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2521391aeba8d2805b54ac578aa7d0,SystemUUID:ec252139-1aeb-a8d2-805b-54ac578aa7d0,BootID:4e785fe8-5068-4fd6-b8b0-5a4aae03c815,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:256a64fb44876d270f04ada1afd3ca431341f249aa52cbe2b3780f8f23961142 
registry.k8s.io/etcdadm/etcd-manager:v3.0.20220727],SizeBytes:216364516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.14],SizeBytes:136567243,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.14],SizeBytes:126380852,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.14],SizeBytes:54860595,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:58cc91c551e9e941a752e205eefed1c8da56f97a51e054b3d341b67bb7bf27eb docker.io/calico/kube-controllers:v3.23.5],SizeBytes:53774679,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.24.5],SizeBytes:41269276,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.24.5],SizeBytes:40816784,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.24.5],SizeBytes:5130223,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:25:51.676: INFO: Logging kubelet events for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:25:51.909: INFO: Logging pods the kubelet thinks is on node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:25:52.126: INFO: ebs-csi-node-rfwfq started at 2022-12-02 09:01:32 +0000 UTC (0+3 container statuses recorded) Dec 2 09:25:52.126: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:25:52.126: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:25:52.126: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:25:52.126: INFO: dns-controller-847484c97f-z8rs4 started 
at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:52.126: INFO: Container dns-controller ready: true, restart count 0 Dec 2 09:25:52.126: INFO: calico-kube-controllers-795c657547-9mz5t started at 2022-12-02 09:01:48 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:52.126: INFO: Container calico-kube-controllers ready: true, restart count 0 Dec 2 09:25:52.126: INFO: kube-controller-manager-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:52.126: INFO: Container kube-controller-manager ready: true, restart count 2 Dec 2 09:25:52.126: INFO: kube-proxy-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:52.126: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:25:52.126: INFO: kube-scheduler-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:52.126: INFO: Container kube-scheduler ready: true, restart count 0 Dec 2 09:25:52.126: INFO: calico-node-xfrb9 started at 2022-12-02 09:01:32 +0000 UTC (4+1 container statuses recorded) Dec 2 09:25:52.126: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:25:52.126: INFO: Init container install-cni ready: true, restart count 0 Dec 2 09:25:52.126: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:25:52.126: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:25:52.126: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:25:52.126: INFO: kops-controller-7l85j started at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:52.126: INFO: Container kops-controller ready: true, restart count 0 Dec 2 09:25:52.126: INFO: etcd-manager-events-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:52.126: INFO: Container etcd-manager ready: true, restart count 0 Dec 2 09:25:52.126: INFO: etcd-manager-main-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:52.126: INFO: Container etcd-manager ready: true, restart count 0 Dec 2 09:25:52.126: INFO: kube-apiserver-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+2 container statuses recorded) Dec 2 09:25:52.126: INFO: Container healthcheck ready: true, restart count 0 Dec 2 09:25:52.126: INFO: Container kube-apiserver ready: true, restart count 1 Dec 2 09:25:52.126: INFO: ebs-csi-controller-55c8659c7c-sqq7m started at 2022-12-02 09:01:32 +0000 UTC (0+5 container statuses recorded) Dec 2 09:25:52.126: INFO: Container csi-attacher ready: true, restart count 0 Dec 2 09:25:52.126: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:25:52.126: INFO: Container csi-resizer ready: true, restart count 0 Dec 2 09:25:52.126: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:25:52.126: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:25:52.914: INFO: Latency metrics for node ip-172-20-55-194.ap-southeast-1.compute.internal Dec 2 09:25:52.915: INFO: Logging node info for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:25:53.127: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-60-164.ap-southeast-1.compute.internal 4d06e01c-27c4-4c2f-b118-647413c7ddf6 42960 0 
2022-12-02 09:02:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-60-164.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-60-164.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-0a7cd257efff997b0"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.60.164/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.61.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:17:54 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:17:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-0a7cd257efff997b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} 
BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:11 +0000 UTC,LastTransitionTime:2022-12-02 09:03:11 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:24:43 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:24:43 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:24:43 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:24:43 +0000 UTC,LastTransitionTime:2022-12-02 09:02:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.60.164,},NodeAddress{Type:ExternalIP,Address:13.212.105.239,},NodeAddress{Type:Hostname,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-105-239.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ab9d0d1126900acfd3b82032bd9b,SystemUUID:ec28ab9d-0d11-2690-0acf-d3b82032bd9b,BootID:925eb9d6-3c66-49ad-be43-0411968ca10c,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c 
k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 
registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0413aef78391fa3e6],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0413aef78391fa3e6,DevicePath:,},},Config:nil,},} Dec 2 09:25:53.128: INFO: Logging kubelet events for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:25:53.346: INFO: Logging pods the kubelet thinks is on node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:25:53.569: INFO: hostexec-ip-172-20-60-164.ap-southeast-1.compute.internal-qrptd started at 2022-12-02 09:20:43 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:53.569: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:25:53.569: INFO: externalsvc-kc489 started at 2022-12-02 09:19:21 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:53.569: INFO: Container externalsvc ready: true, restart count 0 Dec 2 09:25:53.569: INFO: kube-proxy-ip-172-20-60-164.ap-southeast-1.compute.internal started at 2022-12-02 09:01:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:53.569: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:25:53.569: INFO: calico-node-gv4lf started at 2022-12-02 09:02:06 +0000 UTC (4+1 container statuses recorded) Dec 2 09:25:53.569: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:25:53.569: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:25:53.569: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:25:53.569: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:25:53.569: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:25:53.569: INFO: rs-7mmwg started at <nil> (0+0 container statuses recorded) Dec 2 09:25:53.569: INFO: ss2-1 started at 2022-12-02 09:19:19 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:53.569: INFO: Container webserver ready: true, restart count 0 Dec 2 09:25:53.569: INFO: pod-terminate-status-2-14 started at 2022-12-02 09:19:29 +0000 UTC (1+1 container statuses recorded) Dec 2 09:25:53.569: INFO: Init 
container fail ready: false, restart count 0 Dec 2 09:25:53.569: INFO: Container blocked ready: false, restart count 0 Dec 2 09:25:53.569: INFO: pod-subpath-test-preprovisionedpv-wbdt started at 2022-12-02 09:25:47 +0000 UTC (2+2 container statuses recorded) Dec 2 09:25:53.569: INFO: Init container init-volume-preprovisionedpv-wbdt ready: true, restart count 0 Dec 2 09:25:53.569: INFO: Init container test-init-subpath-preprovisionedpv-wbdt ready: true, restart count 0 Dec 2 09:25:53.569: INFO: Container test-container-subpath-preprovisionedpv-wbdt ready: false, restart count 0 Dec 2 09:25:53.569: INFO: Container test-container-volume-preprovisionedpv-wbdt ready: false, restart count 0 Dec 2 09:25:53.569: INFO: rs-8xgsd started at <nil> (0+0 container statuses recorded) Dec 2 09:25:53.569: INFO: ebs-csi-node-lrwc5 started at 2022-12-02 09:02:06 +0000 UTC (0+3 container statuses recorded) Dec 2 09:25:53.569: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:25:53.569: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:25:53.569: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:25:53.569: INFO: rs-cmvfm started at <nil> (0+0 container statuses recorded) Dec 2 09:25:53.569: INFO: external-client started at 2022-12-02 09:19:27 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:53.569: INFO: Container external-client ready: true, restart count 0 Dec 2 09:25:53.569: INFO: hostexec-ip-172-20-60-164.ap-southeast-1.compute.internal-vzvct started at 2022-12-02 09:25:49 +0000 UTC (0+1 container statuses recorded) Dec 2 09:25:53.569: INFO: Container agnhost-container ready: true, restart count 0 Dec 2 09:25:54.387: INFO: Latency metrics for node ip-172-20-60-164.ap-southeast-1.compute.internal Dec 2 09:25:54.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-5116" for this suite.
Filter through log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-node\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\sby\sliveness\sprobe\sbecause\sstartup\sprobe\sdelays\sit$'
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:353 Dec 2 09:20:40.904: getting pod Unexpected error: <*url.Error | 0xc004d86030>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-3005/pods/startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c", Err: { s: "http2: client connection lost", }, } Get "https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-3005/pods/startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c": http2: client connection lost occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:921from junit_24.xml
[BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Dec 2 09:16:31.405: INFO: >>> kubeConfig: /root/.kube/config �[1mSTEP�[0m: Building a namespace api object, basename container-probe �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] should *not* be restarted by liveness probe because startup probe delays it /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:353 �[1mSTEP�[0m: Creating pod startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c in namespace container-probe-3005 Dec 2 09:16:43.531: INFO: Started pod startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c in namespace container-probe-3005 �[1mSTEP�[0m: checking the pod's current state and verifying that restartCount is present Dec 2 09:16:43.742: INFO: Initial restart count of pod startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c is 0 Dec 2 09:20:40.898: FAIL: getting pod Unexpected error: <*url.Error | 0xc004d86030>: { Op: "Get", URL: "https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-3005/pods/startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c", Err: { s: "http2: client connection lost", }, } Get "https://api.e2e-e2e-kops-grid-calico-flatcar-k23-ko24.test-cncf-aws.k8s.io/api/v1/namespaces/container-probe-3005/pods/startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c": http2: client connection lost occurred Full Stack Trace k8s.io/kubernetes/test/e2e/common/node.RunLivenessTest(0xc000a522c0, 0xc00245bc00, 0x0, 0xc002efe150) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:921 +0x8f7 k8s.io/kubernetes/test/e2e/common/node.glob..func2.16() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:374 +0x1be k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0006211e0, 0x735d4a0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 �[1mSTEP�[0m: Collecting events from namespace "container-probe-3005". �[1mSTEP�[0m: Found 5 events. 
Dec 2 09:20:41.716: INFO: At 2022-12-02 09:16:33 +0000 UTC - event for startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c: {default-scheduler } Scheduled: Successfully assigned container-probe-3005/startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c to ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:41.716: INFO: At 2022-12-02 09:16:34 +0000 UTC - event for startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Pulled: Container image "k8s.gcr.io/e2e-test-images/busybox:1.29-2" already present on machine Dec 2 09:20:41.716: INFO: At 2022-12-02 09:16:34 +0000 UTC - event for startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Created: Created container busybox Dec 2 09:20:41.716: INFO: At 2022-12-02 09:16:34 +0000 UTC - event for startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Started: Started container busybox Dec 2 09:20:41.716: INFO: At 2022-12-02 09:16:54 +0000 UTC - event for startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c: {kubelet ip-172-20-34-182.ap-southeast-1.compute.internal} Unhealthy: Startup probe failed: Dec 2 09:20:41.948: INFO: POD NODE PHASE GRACE CONDITIONS Dec 2 09:20:41.949: INFO: startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c ip-172-20-34-182.ap-southeast-1.compute.internal Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:16:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:16:33 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:16:33 +0000 UTC ContainersNotReady containers with unready status: [busybox]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-12-02 09:16:33 +0000 UTC }] Dec 2 09:20:41.951: INFO: Dec 2 09:20:42.626: INFO: Logging node info for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:42.839: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-34-182.ap-southeast-1.compute.internal fd7593c8-1a7c-4e6d-9018-4c36698568dc 38632 0 2022-12-02 09:02:23 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-34-182.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-34-182.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-mock-csi-mock-volumes-7299":"csi-mock-csi-mock-volumes-7299","ebs.csi.aws.com":"i-070fdf3c5d5f93304"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.34.182/19 projectcalico.org/IPv4IPIPTunnelAddr:100.116.72.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.4.0/24\"":{}}}} } {kubelet Update v1 2022-12-02 09:02:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {Go-http-client Update v1 2022-12-02 09:03:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:18:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.4.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-070fdf3c5d5f93304,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:22 +0000 UTC,LastTransitionTime:2022-12-02 09:03:22 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:02:02 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:18:13 +0000 UTC,LastTransitionTime:2022-12-02 09:03:03 +0000 UTC,Reason:KubeletReady,Message:kubelet is 
posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.34.182,},NodeAddress{Type:ExternalIP,Address:54.169.57.14,},NodeAddress{Type:Hostname,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-34-182.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-57-14.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec264a17458d690f294e12b6a6b2138c,SystemUUID:ec264a17-458d-690f-294e-12b6a6b2138c,BootID:37b6e011-229a-4491-b86f-f149d97d10c0,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd 
k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:42.840: INFO: Logging kubelet events for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:43.070: INFO: Logging pods the kubelet thinks is on node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:43.499: INFO: simpletest.rc-swnct started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.499: INFO: simpletest.rc-tfx9v started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: 
Container nginx ready: true, restart count 0 Dec 2 09:20:43.499: INFO: simpletest.rc-rlzhz started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.499: INFO: simpletest.rc-ntn9m started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.499: INFO: calico-node-xhqfx started at 2022-12-02 09:02:23 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:43.499: INFO: Init container install-cni ready: true, restart count 0 Dec 2 09:20:43.499: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:43.499: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:43.499: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:43.499: INFO: startup-adfc80b5-fb75-4cdc-9c89-572b3c11ff5c started at 2022-12-02 09:16:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Container busybox ready: false, restart count 0 Dec 2 09:20:43.499: INFO: test-ss-0 started at 2022-12-02 09:17:26 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:43.499: INFO: kube-proxy-ip-172-20-34-182.ap-southeast-1.compute.internal started at 2022-12-02 09:02:02 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:43.499: INFO: ebs-csi-node-4b4zl started at 2022-12-02 09:02:23 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:43.499: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:43.499: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:43.499: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:43.499: INFO: simpletest.rc-rptqs started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:43.499: INFO: pod-client started at 2022-12-02 09:19:00 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Container pod-client ready: true, restart count 0 Dec 2 09:20:43.499: INFO: coredns-5556cb978d-bx2m5 started at 2022-12-02 09:03:10 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Container coredns ready: true, restart count 0 Dec 2 09:20:43.499: INFO: csi-mockplugin-0 started at 2022-12-02 09:18:23 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:43.499: INFO: Container csi-provisioner ready: true, restart count 0 Dec 2 09:20:43.499: INFO: Container driver-registrar ready: true, restart count 0 Dec 2 09:20:43.499: INFO: Container mock ready: true, restart count 0 Dec 2 09:20:43.499: INFO: ss2-2 started at 2022-12-02 09:19:22 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Container webserver ready: false, restart count 0 Dec 2 09:20:43.499: INFO: simpletest.rc-w9lsq started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:43.499: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.239: INFO: Latency metrics for node ip-172-20-34-182.ap-southeast-1.compute.internal Dec 2 09:20:44.239: INFO: Logging node info for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:44.487: INFO: Node Info: 
&Node{ObjectMeta:{ip-172-20-37-90.ap-southeast-1.compute.internal f779b12d-0e95-4e7f-929e-368941a29b99 40279 0 2022-12-02 09:02:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-37-90.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-37-90.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-001dd83f455b4a895"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.37.90/19 projectcalico.org/IPv4IPIPTunnelAddr:100.114.18.64 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.1.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:19:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.1.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-001dd83f455b4a895,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:05 +0000 UTC,LastTransitionTime:2022-12-02 09:03:05 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:01:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:19:15 +0000 UTC,LastTransitionTime:2022-12-02 09:02:55 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.37.90,},NodeAddress{Type:ExternalIP,Address:13.212.195.103,},NodeAddress{Type:Hostname,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-37-90.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-195-103.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec216e9b184e3e44fb8ed6af9b651047,SystemUUID:ec216e9b-184e-3e44-fb8e-d6af9b651047,BootID:0bbb1eb8-60c7-4bb1-b8c7-bb110f238f78,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 
k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e registry.k8s.io/coredns/coredns:v1.8.6],SizeBytes:13585107,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b 
k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Dec 2 09:20:44.487: INFO: Logging kubelet events for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:44.701: INFO: Logging pods the kubelet thinks is on node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:44.919: INFO: coredns-5556cb978d-pztr5 started at 2022-12-02 09:02:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container coredns ready: true, restart count 0 Dec 2 09:20:44.919: INFO: simpletest.rc-xqqbd started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.919: INFO: ebs-csi-node-vswvn started at 2022-12-02 09:02:04 +0000 UTC (0+3 container statuses recorded) Dec 2 09:20:44.919: INFO: Container ebs-plugin ready: true, restart count 0 Dec 2 09:20:44.919: INFO: Container liveness-probe ready: true, restart count 0 Dec 2 09:20:44.919: INFO: Container node-driver-registrar ready: true, restart count 0 Dec 2 09:20:44.919: INFO: test-ss-1 started at 2022-12-02 09:18:26 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container webserver ready: true, restart count 0 Dec 2 09:20:44.919: INFO: agnhost-primary-dgxqj started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container agnhost-primary ready: false, restart count 0 Dec 2 09:20:44.919: INFO: pod-secrets-0da0406d-ca0f-4f4d-84a5-33a16c483cff started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container secret-volume-test ready: false, restart count 0 Dec 2 09:20:44.919: INFO: pod-terminate-status-0-14 started at 2022-12-02 09:20:41 +0000 UTC (1+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Init container fail ready: false, restart count 0 Dec 2 09:20:44.919: INFO: Container blocked ready: false, restart count 0 Dec 2 09:20:44.919: INFO: execpodws7zw started at 2022-12-02 09:20:42 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container agnhost-container ready: false, restart count 0 Dec 2 09:20:44.919: INFO: simpletest.rc-zj2ft started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.919: INFO: test-webserver-98190dda-eab4-4a0b-a4ec-afbb6264f9c0 started at 2022-12-02 09:18:17 +0000 UTC (0+1 
container statuses recorded) Dec 2 09:20:44.919: INFO: Container test-webserver ready: true, restart count 0 Dec 2 09:20:44.919: INFO: coredns-autoscaler-85fcbbb64-kb6k7 started at 2022-12-02 09:02:55 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container autoscaler ready: true, restart count 0 Dec 2 09:20:44.919: INFO: simpletest.rc-njxsz started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.919: INFO: kube-proxy-ip-172-20-37-90.ap-southeast-1.compute.internal started at 2022-12-02 09:01:54 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container kube-proxy ready: true, restart count 0 Dec 2 09:20:44.919: INFO: calico-node-cqg7n started at 2022-12-02 09:02:04 +0000 UTC (4+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Init container upgrade-ipam ready: true, restart count 0 Dec 2 09:20:44.919: INFO: Init container install-cni ready: true, restart count 1 Dec 2 09:20:44.919: INFO: Init container mount-bpffs ready: true, restart count 0 Dec 2 09:20:44.919: INFO: Init container flexvol-driver ready: true, restart count 0 Dec 2 09:20:44.919: INFO: Container calico-node ready: true, restart count 0 Dec 2 09:20:44.919: INFO: httpd started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container httpd ready: false, restart count 0 Dec 2 09:20:44.919: INFO: bin-falseb55ef21a-ef68-4260-9830-7a34a8977c97 started at 2022-12-02 09:20:42 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container bin-falseb55ef21a-ef68-4260-9830-7a34a8977c97 ready: false, restart count 0 Dec 2 09:20:44.919: INFO: simpletest.rc-r9d9b started at 2022-12-02 09:18:34 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:44.919: INFO: simpletest.rc-t5ztv started at 2022-12-02 09:18:31 +0000 UTC (0+1 container statuses recorded) Dec 2 09:20:44.919: INFO: Container nginx ready: true, restart count 0 Dec 2 09:20:46.108: INFO: Latency metrics for node ip-172-20-37-90.ap-southeast-1.compute.internal Dec 2 09:20:46.108: INFO: Logging node info for node ip-172-20-49-67.ap-southeast-1.compute.internal Dec 2 09:20:46.319: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-49-67.ap-southeast-1.compute.internal 81600d2c-3d2a-4421-913e-e1c53c1ad1df 41217 0 2022-12-02 09:02:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a io.kubernetes.storage.mock/node:some-mock-node kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-49-67.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-49-67.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-1102":"ip-172-20-49-67.ap-southeast-1.compute.internal","ebs.csi.aws.com":"i-056f60b74d454bea7"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.49.67/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.24.64 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:03:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.3.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:18:47 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:18:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:io.kubernetes.storage.mock/node":{},"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.3.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-056f60b74d454bea7,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:19 +0000 UTC,LastTransitionTime:2022-12-02 09:03:19 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 
UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:01:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:20:44 +0000 UTC,LastTransitionTime:2022-12-02 09:03:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.49.67,},NodeAddress{Type:ExternalIP,Address:13.228.79.89,},NodeAddress{Type:Hostname,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-49-67.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-228-79-89.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2bd833fc2a274ccf3bf225f245ddce,SystemUUID:ec2bd833-fc2a-274c-cf3b-f225f245ddce,BootID:1ab59414-4d0c-4bc8-bb64-5f41a1b02c74,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/build-image/debian-iptables@sha256:dd6d57960dc104a4ee0fa7c58c6faa3e38725561af374c17f8cb905f7f73ba66 k8s.gcr.io/build-image/debian-iptables:bullseye-v1.1.0],SizeBytes:27059231,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 
k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 
registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-086a725fe4e89b13b,DevicePath:,},},Config:nil,},}
Dec 2 09:20:46.320: INFO: Logging kubelet events for node ip-172-20-49-67.ap-southeast-1.compute.internal
Dec 2 09:20:46.539: INFO: Logging pods the kubelet thinks is on node ip-172-20-49-67.ap-southeast-1.compute.internal
Dec 2 09:20:46.765: INFO: ebs-csi-node-w9kzj started at 2022-12-02 09:02:20 +0000 UTC (0+3 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container ebs-plugin ready: true, restart count 0
Dec 2 09:20:46.765: INFO: Container liveness-probe ready: true, restart count 0
Dec 2 09:20:46.765: INFO: Container node-driver-registrar ready: true, restart count 0
Dec 2 09:20:46.765: INFO: default started at 2022-12-02 09:20:44 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container cntr ready: true, restart count 0
Dec 2 09:20:46.765: INFO: csi-hostpathplugin-0 started at 2022-12-02 09:18:29 +0000 UTC (0+7 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container csi-attacher ready: true, restart count 0
Dec 2 09:20:46.765: INFO: Container csi-provisioner ready: true, restart count 0
Dec 2 09:20:46.765: INFO: Container csi-resizer ready: true, restart count 0
Dec 2 09:20:46.765: INFO: Container csi-snapshotter ready: true, restart count 0
Dec 2 09:20:46.765: INFO: Container hostpath ready: true, restart count 0
Dec 2 09:20:46.765: INFO: Container liveness-probe ready: true, restart count 0
Dec 2 09:20:46.765: INFO: Container node-driver-registrar ready: true, restart count 0
Dec 2 09:20:46.765: INFO: simpletest.rc-nxlcw started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container nginx ready: true, restart count 0
Dec 2 09:20:46.765: INFO: private started at 2022-12-02 09:20:41 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container cntr ready: true, restart count 0
Dec 2 09:20:46.765: INFO: externalsvc-gfw8b started at 2022-12-02 09:19:21 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container externalsvc ready: true, restart count 0
Dec 2 09:20:46.765: INFO: simpletest.rc-s8s8z started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container nginx ready: true, restart count 0
Dec 2 09:20:46.765: INFO: svc-latency-rc-n6rnr started at 2022-12-02 09:19:15 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container svc-latency-rc ready: true, restart count 0
Dec 2 09:20:46.765: INFO: calico-node-n6lj9 started at 2022-12-02 09:02:20 +0000 UTC (4+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Init container upgrade-ipam ready: true, restart count 0
Dec 2 09:20:46.765: INFO: Init container install-cni ready: true, restart count 1
Dec 2 09:20:46.765: INFO: Init container mount-bpffs ready: true, restart count 0
Dec 2 09:20:46.765: INFO: Init container flexvol-driver ready: true, restart count 0
Dec 2 09:20:46.765: INFO: Container calico-node ready: true, restart count 0
Dec 2 09:20:46.765: INFO: master started at 2022-12-02 09:19:13 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container cntr ready: true, restart count 0
Dec 2 09:20:46.765: INFO: downwardapi-volume-e3f86704-2ad4-4471-80f7-f49d1890acfa started at 2022-12-02 09:20:44 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container client-container ready: false, restart count 0
Dec 2 09:20:46.765: INFO: simpletest.rc-xt5qf started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container nginx ready: true, restart count 0
Dec 2 09:20:46.765: INFO: slave started at 2022-12-02 09:19:22 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container cntr ready: true, restart count 0
Dec 2 09:20:46.765: INFO: ss-0 started at 2022-12-02 09:19:03 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container webserver ready: true, restart count 0
Dec 2 09:20:46.765: INFO: simpletest.rc-s98w8 started at 2022-12-02 09:18:31 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container nginx ready: true, restart count 0
Dec 2 09:20:46.765: INFO: kube-proxy-ip-172-20-49-67.ap-southeast-1.compute.internal started at 2022-12-02 09:01:59 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container kube-proxy ready: true, restart count 0
Dec 2 09:20:46.765: INFO: simpletest.rc-q75ts started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container nginx ready: true, restart count 0
Dec 2 09:20:46.765: INFO: simpletest.rc-sdlx6 started at 2022-12-02 09:18:30 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container nginx ready: true, restart count 0
Dec 2 09:20:46.765: INFO: simpletest.rc-vjkr4 started at 2022-12-02 09:18:32 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container nginx ready: true, restart count 0
Dec 2 09:20:46.765: INFO: simpletest.rc-qfccr started at 2022-12-02 09:18:33 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container nginx ready: true, restart count 0
Dec 2 09:20:46.765: INFO: ss2-0 started at 2022-12-02 09:19:03 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container webserver ready: true, restart count 0
Dec 2 09:20:46.765: INFO: oidc-discovery-validator started at 2022-12-02 09:19:03 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:46.765: INFO: Container oidc-discovery-validator ready: false, restart count 0
Dec 2 09:20:47.967: INFO: Latency metrics for node ip-172-20-49-67.ap-southeast-1.compute.internal
Dec 2 09:20:47.968: INFO: Logging node info for node ip-172-20-55-194.ap-southeast-1.compute.internal
Dec 2 09:20:48.214: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-55-194.ap-southeast-1.compute.internal 890854e9-f510-402d-9886-49c1d41318f4 34763 0 2022-12-02 09:00:57 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:c5.large beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:master-ap-southeast-1a kops.k8s.io/kops-controller-pki: kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-55-194.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers: node.kubernetes.io/instance-type:c5.large topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"ebs.csi.aws.com":"i-00b46fae03d775a19"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.55.194/19 projectcalico.org/IPv4IPIPTunnelAddr:100.104.201.0
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-12-02 09:00:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {protokube Update v1 2022-12-02 09:01:11 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/kops-controller-pki":{},"f:node-role.kubernetes.io/control-plane":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}} } {kops-controller Update v1 2022-12-02 09:01:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/master":{}}}} } {kube-controller-manager Update v1 2022-12-02 09:01:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.0.0/24\"":{}},"f:taints":{}}} } {Go-http-client Update v1 2022-12-02 09:02:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2022-12-02 09:02:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.0.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-00b46fae03d775a19,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[100.96.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3894931456 0} {<nil>} 3803644Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3790073856 0} {<nil>} 3701244Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:02:00 +0000 UTC,LastTransitionTime:2022-12-02 09:02:00 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:00:51 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:17:47 +0000 UTC,LastTransitionTime:2022-12-02 09:01:48 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.55.194,},NodeAddress{Type:ExternalIP,Address:54.169.84.77,},NodeAddress{Type:Hostname,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-55-194.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-54-169-84-77.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec2521391aeba8d2805b54ac578aa7d0,SystemUUID:ec252139-1aeb-a8d2-805b-54ac578aa7d0,BootID:4e785fe8-5068-4fd6-b8b0-5a4aae03c815,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/etcdadm/etcd-manager@sha256:256a64fb44876d270f04ada1afd3ca431341f249aa52cbe2b3780f8f23961142 registry.k8s.io/etcdadm/etcd-manager:v3.0.20220727],SizeBytes:216364516,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver-amd64:v1.23.14],SizeBytes:136567243,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager-amd64:v1.23.14],SizeBytes:126380852,},ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler-amd64:v1.23.14],SizeBytes:54860595,},ContainerImage{Names:[docker.io/calico/kube-controllers@sha256:58cc91c551e9e941a752e205eefed1c8da56f97a51e054b3d341b67bb7bf27eb docker.io/calico/kube-controllers:v3.23.5],SizeBytes:53774679,},ContainerImage{Names:[registry.k8s.io/kops/kops-controller:1.24.5],SizeBytes:41269276,},ContainerImage{Names:[registry.k8s.io/kops/dns-controller:1.24.5],SizeBytes:40816784,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-provisioner@sha256:122bfb8c1edabb3c0edd63f06523e6940d958d19b3957dc7b1d6f81e9f1f6119 registry.k8s.io/sig-storage/csi-provisioner:v3.1.0],SizeBytes:23345856,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-resizer@sha256:9ebbf9f023e7b41ccee3d52afe39a89e3ddacdbb69269d583abfc25847cfd9e4 
registry.k8s.io/sig-storage/csi-resizer:v1.4.0],SizeBytes:22381475,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-attacher@sha256:8b9c313c05f54fb04f8d430896f5f5904b6cb157df261501b29adc04d2b2dc7b registry.k8s.io/sig-storage/csi-attacher:v3.4.0],SizeBytes:22085298,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[registry.k8s.io/kops/kube-apiserver-healthcheck:1.24.5],SizeBytes:5130223,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Dec 2 09:20:48.215: INFO: Logging kubelet events for node ip-172-20-55-194.ap-southeast-1.compute.internal
Dec 2 09:20:48.466: INFO: Logging pods the kubelet thinks is on node ip-172-20-55-194.ap-southeast-1.compute.internal
Dec 2 09:20:48.732: INFO: etcd-manager-main-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:48.737: INFO: Container etcd-manager ready: true, restart count 0
Dec 2 09:20:48.738: INFO: kube-apiserver-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+2 container statuses recorded)
Dec 2 09:20:48.738: INFO: Container healthcheck ready: true, restart count 0
Dec 2 09:20:48.738: INFO: Container kube-apiserver ready: true, restart count 1
Dec 2 09:20:48.738: INFO: kube-controller-manager-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:48.738: INFO: Container kube-controller-manager ready: true, restart count 2
Dec 2 09:20:48.738: INFO: kube-proxy-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:48.738: INFO: Container kube-proxy ready: true, restart count 0
Dec 2 09:20:48.738: INFO: kube-scheduler-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:48.738: INFO: Container kube-scheduler ready: true, restart count 0
Dec 2 09:20:48.738: INFO: calico-node-xfrb9 started at 2022-12-02 09:01:32 +0000 UTC (4+1 container statuses recorded)
Dec 2 09:20:48.738: INFO: Init container upgrade-ipam ready: true, restart count 0
Dec 2 09:20:48.740: INFO: Init container install-cni ready: true, restart count 0
Dec 2 09:20:48.740: INFO: Init container mount-bpffs ready: true, restart count 0
Dec 2 09:20:48.740: INFO: Init container flexvol-driver ready: true, restart count 0
Dec 2 09:20:48.740: INFO: Container calico-node ready: true, restart count 0
Dec 2 09:20:48.740: INFO: kops-controller-7l85j started at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:48.740: INFO: Container kops-controller ready: true, restart count 0
Dec 2 09:20:48.740: INFO: etcd-manager-events-ip-172-20-55-194.ap-southeast-1.compute.internal started at 2022-12-02 09:00:21 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:48.740: INFO: Container etcd-manager ready: true, restart count 0
Dec 2 09:20:48.740: INFO: ebs-csi-controller-55c8659c7c-sqq7m started at 2022-12-02 09:01:32 +0000 UTC (0+5 container statuses recorded)
Dec 2 09:20:48.740: INFO: Container csi-attacher ready: true, restart count 0
Dec 2 09:20:48.740: INFO: Container csi-provisioner ready: true, restart count 0
Dec 2 09:20:48.740: INFO: Container csi-resizer ready: true, restart count 0
Dec 2 09:20:48.740: INFO: Container ebs-plugin ready: true, restart count 0
Dec 2 09:20:48.740: INFO: Container liveness-probe ready: true, restart count 0
Dec 2 09:20:48.740: INFO: dns-controller-847484c97f-z8rs4 started at 2022-12-02 09:01:32 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:48.740: INFO: Container dns-controller ready: true, restart count 0
Dec 2 09:20:48.740: INFO: calico-kube-controllers-795c657547-9mz5t started at 2022-12-02 09:01:48 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:48.740: INFO: Container calico-kube-controllers ready: true, restart count 0
Dec 2 09:20:48.740: INFO: ebs-csi-node-rfwfq started at 2022-12-02 09:01:32 +0000 UTC (0+3 container statuses recorded)
Dec 2 09:20:48.740: INFO: Container ebs-plugin ready: true, restart count 0
Dec 2 09:20:48.740: INFO: Container liveness-probe ready: true, restart count 0
Dec 2 09:20:48.740: INFO: Container node-driver-registrar ready: true, restart count 0
Dec 2 09:20:50.265: INFO: Latency metrics for node ip-172-20-55-194.ap-southeast-1.compute.internal
Dec 2 09:20:50.291: INFO: Logging node info for node ip-172-20-60-164.ap-southeast-1.compute.internal
Dec 2 09:20:50.765: INFO: Node Info: &Node{ObjectMeta:{ip-172-20-60-164.ap-southeast-1.compute.internal 4d06e01c-27c4-4c2f-b118-647413c7ddf6 40537 0 2022-12-02 09:02:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:t3.medium beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:ap-southeast-1 failure-domain.beta.kubernetes.io/zone:ap-southeast-1a kops.k8s.io/instancegroup:nodes-ap-southeast-1a kubernetes.io/arch:amd64 kubernetes.io/hostname:ip-172-20-60-164.ap-southeast-1.compute.internal kubernetes.io/os:linux kubernetes.io/role:node node-role.kubernetes.io/node: node.kubernetes.io/instance-type:t3.medium topology.ebs.csi.aws.com/zone:ap-southeast-1a topology.hostpath.csi/node:ip-172-20-60-164.ap-southeast-1.compute.internal topology.kubernetes.io/region:ap-southeast-1 topology.kubernetes.io/zone:ap-southeast-1a] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-9857":"ip-172-20-60-164.ap-southeast-1.compute.internal","ebs.csi.aws.com":"i-0a7cd257efff997b0"} node.alpha.kubernetes.io/ttl:0 projectcalico.org/IPv4Address:172.20.60.164/19 projectcalico.org/IPv4IPIPTunnelAddr:100.106.61.128 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kops-controller Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:kops.k8s.io/instancegroup":{},"f:kubernetes.io/role":{},"f:node-role.kubernetes.io/node":{}}}} } {kubelet Update v1 2022-12-02 09:02:05 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-12-02 09:02:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"100.96.2.0/24\"":{}}}} } {Go-http-client Update v1 2022-12-02 09:03:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:projectcalico.org/IPv4Address":{},"f:projectcalico.org/IPv4IPIPTunnelAddr":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-12-02 09:17:54 +0000 UTC FieldsV1 {"f:status":{"f:volumesAttached":{}}} status} {kubelet Update v1 2022-12-02 09:17:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:csi.volume.kubernetes.io/nodeid":{}},"f:labels":{"f:topology.ebs.csi.aws.com/zone":{},"f:topology.hostpath.csi/node":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:volumesInUse":{}}} status}]},Spec:NodeSpec{PodCIDR:100.96.2.0/24,DoNotUseExternalID:,ProviderID:aws:///ap-southeast-1a/i-0a7cd257efff997b0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[100.96.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{47441653760 0} {<nil>} 46329740Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{4054310912 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{42697488314 0} {<nil>} 42697488314 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3949453312 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-12-02 09:03:11 +0000 UTC,LastTransitionTime:2022-12-02 09:03:11 +0000 UTC,Reason:CalicoIsUp,Message:Calico is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:01:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-12-02 09:19:38 +0000 UTC,LastTransitionTime:2022-12-02 09:02:56 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.20.60.164,},NodeAddress{Type:ExternalIP,Address:13.212.105.239,},NodeAddress{Type:Hostname,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:InternalDNS,Address:ip-172-20-60-164.ap-southeast-1.compute.internal,},NodeAddress{Type:ExternalDNS,Address:ec2-13-212-105-239.ap-southeast-1.compute.amazonaws.com,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ec28ab9d0d1126900acfd3b82032bd9b,SystemUUID:ec28ab9d-0d11-2690-0acf-d3b82032bd9b,BootID:925eb9d6-3c66-49ad-be43-0411968ca10c,KernelVersion:5.15.79-flatcar,OSImage:Flatcar Container Linux by Kinvolk 3417.1.0 (Oklo),ContainerRuntimeVersion:containerd://1.6.9,KubeletVersion:v1.23.14,KubeProxyVersion:v1.23.14,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.14],SizeBytes:114239543,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:11e6a66017ba4e4b938c1612b7a54a3befcefd354796c04e1dba76873a13518e k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5],SizeBytes:112030526,},ContainerImage{Names:[docker.io/calico/cni@sha256:7ca5c455cff6c0d661e33918d95a1133afb450411dbfb7e4369a9ecf5e0212dc docker.io/calico/cni:v3.23.5],SizeBytes:107998578,},ContainerImage{Names:[docker.io/calico/node@sha256:b7f4f7a0ce463de5d294fdf2bb13f61035ec6e3e5ee05dd61dcc8e79bc29d934 docker.io/calico/node:v3.23.5],SizeBytes:75105675,},ContainerImage{Names:[docker.io/library/nginx@sha256:e209ac2f37c70c1e0e9873a5f7231e91dcd83fdf1178d8ed36c2ec09974210ba docker.io/library/nginx:latest],SizeBytes:56833911,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e k8s.gcr.io/e2e-test-images/agnhost:2.39],SizeBytes:51105200,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[registry.k8s.io/provider-aws/aws-ebs-csi-driver@sha256:2727c4ba96b420f6280107daaf4a40a5de5f7241a1b70052056a5016dff05b2f registry.k8s.io/provider-aws/aws-ebs-csi-driver:v1.8.0],SizeBytes:25940355,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:4e74c0492bceddc598de1c90cc5bc14dcda94cb49fa9c5bad9d117c4834b5e08 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.1],SizeBytes:22631062,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-provisioner@sha256:c8e03f60afa90a28e4bb6ec9a8d0fc36d89de4b7475cf2d613afa793ec969fe0 k8s.gcr.io/sig-storage/csi-provisioner:v2.2.0],SizeBytes:22629806,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:36c31f7e1f433c9634d24f876353e8646246d81a03c4e351202c2644daff1620 
k8s.gcr.io/sig-storage/csi-resizer:v1.2.0],SizeBytes:21584611,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:c5be65d6679efabb969d9b019300d187437ae876f992c40911fd2892bbef3b36 k8s.gcr.io/sig-storage/csi-attacher:v3.2.0],SizeBytes:21367429,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-attacher@sha256:60ab9b3e6a030d3038c87c0d6bca2930f58d1d72823e6a4af09767dc83b696a2 k8s.gcr.io/sig-storage/csi-attacher:v3.2.1],SizeBytes:21366448,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-snapshotter@sha256:ed98431376c9e944e19a465fe8ea944806714dd95416a0821096c78d66b579bd k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1],SizeBytes:21331336,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-resizer@sha256:7a5ba58a44e0d749e0767e4e37315bcf6a61f33ce3185c1991848af4db0fb70a k8s.gcr.io/sig-storage/csi-resizer:v1.1.0],SizeBytes:20096832,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonroot@sha256:b9e2958a3dd879e3cf11142228c6d073d0fc4ea2e857c3be6f4fb0ab5fb2c937 k8s.gcr.io/e2e-test-images/nonroot:1.2],SizeBytes:17748301,},ContainerImage{Names:[k8s.gcr.io/sig-storage/hostpathplugin@sha256:232fe80174d60d520d36043103853a1d7ab4b7f3782cf43e45034f04ccda58ce k8s.gcr.io/sig-storage/hostpathplugin:v1.7.1],SizeBytes:14930811,},ContainerImage{Names:[registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:0103eee7c35e3e0b5cd8cdca9850dc71c793cdeb6669d8be7a89440da2d06ae4 registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.5.1],SizeBytes:9133109,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:a61d309da54641db41fb8f35718f744e9f730d4d0384f8c4b186ddc9f06cbd5f k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0],SizeBytes:9068367,},ContainerImage{Names:[k8s.gcr.io/sig-storage/csi-node-driver-registrar@sha256:2dee3fe5fe861bb66c3a4ac51114f3447a4cd35870e0f2e2b558c7a400d89589 k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0],SizeBytes:8561694,},ContainerImage{Names:[k8s.gcr.io/sig-storage/mock-driver@sha256:a7b517f9e0f42ffade802eef9cefa271372386b85f55b702b493241e58459793 k8s.gcr.io/sig-storage/mock-driver:v4.1.0],SizeBytes:8223849,},ContainerImage{Names:[registry.k8s.io/sig-storage/livenessprobe@sha256:44d8275b3f145bc290fd57cb00de2d713b5e72d2e827d8c5555f8ddb40bf3f02 registry.k8s.io/sig-storage/livenessprobe:v2.5.0],SizeBytes:8107305,},ContainerImage{Names:[k8s.gcr.io/sig-storage/livenessprobe@sha256:529be2c9770add0cdd0c989115222ea9fc1be430c11095eb9f6dafcf98a36e2b k8s.gcr.io/sig-storage/livenessprobe:v2.4.0],SizeBytes:7960518,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:3263463,},ContainerImage{Names:[docker.io/library/busybox@sha256:59f225fdf34f28a07d22343ee415ee417f6b8365cf4a0d3a2933cbd8fd7cf8c1 docker.io/library/busybox:latest],SizeBytes:777278,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:732746,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db 
registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6 registry.k8s.io/pause:3.6],SizeBytes:301773,},},VolumesInUse:[kubernetes.io/csi/ebs.csi.aws.com^vol-0413aef78391fa3e6],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/ebs.csi.aws.com^vol-0413aef78391fa3e6,DevicePath:,},},Config:nil,},}
Dec 2 09:20:50.985: INFO: Logging kubelet events for node ip-172-20-60-164.ap-southeast-1.compute.internal
Dec 2 09:20:51.478: INFO: Logging pods the kubelet thinks is on node ip-172-20-60-164.ap-southeast-1.compute.internal
Dec 2 09:20:54.639: INFO: ebs-csi-node-lrwc5 started at 2022-12-02 09:02:06 +0000 UTC (0+3 container statuses recorded)
Dec 2 09:20:54.676: INFO: Container ebs-plugin ready: true, restart count 0
Dec 2 09:20:54.680: INFO: Container liveness-probe ready: true, restart count 0
Dec 2 09:20:54.680: INFO: Container node-driver-registrar ready: true, restart count 0
Dec 2 09:20:54.680: INFO: external-client started at 2022-12-02 09:19:27 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:54.689: INFO: Container external-client ready: true, restart count 0
Dec 2 09:20:54.689: INFO: hostexec-ip-172-20-60-164.ap-southeast-1.compute.internal-qrptd started at 2022-12-02 09:20:43 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:54.689: INFO: Container agnhost-container ready: true, restart count 0
Dec 2 09:20:54.689: INFO: externalsvc-kc489 started at 2022-12-02 09:19:21 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:54.689: INFO: Container externalsvc ready: true, restart count 0
Dec 2 09:20:54.689: INFO: csi-hostpathplugin-0 started at 2022-12-02 09:17:33 +0000 UTC (0+7 container statuses recorded)
Dec 2 09:20:54.689: INFO: Container csi-attacher ready: true, restart count 0
Dec 2 09:20:54.689: INFO: Container csi-provisioner ready: true, restart count 0
Dec 2 09:20:54.689: INFO: Container csi-resizer ready: true, restart count 0
Dec 2 09:20:54.689: INFO: Container csi-snapshotter ready: true, restart count 0
Dec 2 09:20:54.689: INFO: Container hostpath ready: true, restart count 0
Dec 2 09:20:54.689: INFO: Container liveness-probe ready: true, restart count 0
Dec 2 09:20:54.689: INFO: Container node-driver-registrar ready: true, restart count 0
Dec 2 09:20:54.689: INFO: kube-proxy-ip-172-20-60-164.ap-southeast-1.compute.internal started at 2022-12-02 09:01:55 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:54.689: INFO: Container kube-proxy ready: true, restart count 0
Dec 2 09:20:54.689: INFO: calico-node-gv4lf started at 2022-12-02 09:02:06 +0000 UTC (4+1 container statuses recorded)
Dec 2 09:20:54.689: INFO: Init container upgrade-ipam ready: true, restart count 0
Dec 2 09:20:54.689: INFO: Init container install-cni ready: true, restart count 1
Dec 2 09:20:54.689: INFO: Init container mount-bpffs ready: true, restart count 0
Dec 2 09:20:54.689: INFO: Init container flexvol-driver ready: true, restart count 0
Dec 2 09:20:54.689: INFO: Container calico-node ready: true, restart count 0
Dec 2 09:20:54.689: INFO: ss2-1 started at 2022-12-02 09:19:19 +0000 UTC (0+1 container statuses recorded)
Dec 2 09:20:54.690: INFO: Container webserver ready: true, restart count 0
Dec 2 09:20:54.690: INFO: pod-terminate-status-2-14 started at 2022-12-02 09:19:29 +0000 UTC (1+1 container statuses recorded)
Dec 2 09:20:54.690: INFO: Init container fail ready: false, restart count 0
Dec 2 09:20:54.690: INFO: Container blocked ready: false, restart count 0
Dec 2 09:21:00.653: INFO: Latency metrics for node ip-172-20-60-164.ap-southeast-1.compute.internal
Dec 2 09:21:00.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3005" for this suite.
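Note on the final wait above: the framework's "Waiting up to 3m0s for all (but 0) nodes to be ready" gate passes only while every node's status.conditions reports Ready=True, the same NodeCondition{Type:Ready,Status:True,...} entries visible in the Node Info dumps earlier in this log. A minimal client-go sketch of that kind of readiness poll follows; it is illustrative only, not the e2e framework's actual helper, and the kubeconfig path and 10s poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True, mirroring
// the NodeCondition{Type:Ready,...} entries dumped in the log above.
func nodeIsReady(node corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll every 10s for up to 3m ("Waiting up to 3m0s ... nodes to be ready").
	// The 10s interval is an assumption, not the framework's exact value.
	err = wait.PollImmediate(10*time.Second, 3*time.Minute, func() (bool, error) {
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and retry
		}
		for _, n := range nodes.Items {
			if !nodeIsReady(n) {
				fmt.Printf("node %s not ready yet\n", n.Name)
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
}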
exit status 255
from junit_runner.xml
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can be classified by adding FlowSchema and PriorityLevelConfiguration
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
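
The garbage-collector cases above all reduce to the deletion propagation policy carried on the owner's delete call. A minimal client-go sketch of foreground vs. orphan deletion, assuming a hypothetical Deployment "example-deployment" in the default namespace:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; substitute your own.
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // Foreground blocks deletion until dependents (ReplicaSets, Pods)
        // are gone; metav1.DeletePropagationOrphan would instead leave the
        // dependents behind, as the orphaning tests above verify.
        policy := metav1.DeletePropagationForeground
        err = client.AppsV1().Deployments("default").Delete(context.TODO(),
            "example-deployment", metav1.DeleteOptions{PropagationPolicy: &policy})
        if err != nil {
            panic(err)
        }
    }
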
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create v1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with cross namespace pod affinity scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
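
The ResourceQuota cases above share one shape: create a quota, then wait for the quota controller to fill in status.used. A hedged sketch with invented names and limits, not the e2e code itself:

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; substitute your own.
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        quota := &v1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "example-quota"},
            Spec: v1.ResourceQuotaSpec{
                Hard: v1.ResourceList{
                    v1.ResourcePods:    resource.MustParse("10"),
                    v1.ResourceSecrets: resource.MustParse("5"),
                },
            },
        }
        created, err := client.CoreV1().ResourceQuotas("default").Create(
            context.TODO(), quota, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        // status.Used is populated by the quota controller once it has
        // observed the namespace, which is what "status is promptly
        // calculated" checks.
        fmt.Println("created quota:", created.Name)
    }
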
Kubernetes e2e suite [sig-api-machinery] Server request timeout default timeout should be used if the specified timeout in the request URL is 0s
Kubernetes e2e suite [sig-api-machinery] Server request timeout should return HTTP status code 400 if the user specifies an invalid timeout in the request URL
Kubernetes e2e suite [sig-api-machinery] Server request timeout the request should be served with a default timeout if the specified timeout in the request URL exceeds maximum allowed
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should create an applied object if it does not already exist
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should give up ownership of a field if forced applied by a controller
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should ignore conflict errors if force apply is used
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should not remove a field if an owner unsets the field but other managers still have ownership of the field
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should remove a field if it is owned but removed in the apply request
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should work for CRDs
Kubernetes e2e suite [sig-api-machinery] ServerSideApply should work for subresources
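
The ServerSideApply cases above revolve around field managers and conflicts. A minimal sketch of a forced server-side apply through client-go's Patch with the apply patch type; the ConfigMap name, manager name, and kubeconfig path are assumptions:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; substitute your own.
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // Apply creates the object if it does not already exist; with
        // Force=true the request also takes ownership of conflicting
        // fields instead of failing with a conflict error.
        manifest := []byte(`{
          "apiVersion": "v1",
          "kind": "ConfigMap",
          "metadata": {"name": "example-cm", "namespace": "default"},
          "data": {"key": "value"}
        }`)
        force := true
        _, err = client.CoreV1().ConfigMaps("default").Patch(context.TODO(),
            "example-cm", types.ApplyPatchType, manifest,
            metav1.PatchOptions{FieldManager: "example-manager", Force: &force})
        if err != nil {
            panic(err)
        }
    }
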
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
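
The Watchers cases above hinge on resource versions. A hedged sketch of restarting a watch from the last observed resourceVersion so no intervening events are missed; the namespace and resource are illustrative choices:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; substitute your own.
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        cms := client.CoreV1().ConfigMaps("default")
        // List first to capture a resourceVersion, then watch from exactly
        // that point; this is the list-then-watch pattern the tests cover.
        list, err := cms.List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        w, err := cms.Watch(context.TODO(),
            metav1.ListOptions{ResourceVersion: list.ResourceVersion})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        event := <-w.ResultChan()
        fmt.Println("first event after (re)start:", event.Type)
    }
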
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/json,application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf"
Kubernetes e2e suite [sig-api-machinery] client-go should negotiate watch and report errors with accept "application/vnd.kubernetes.protobuf,application/json"
Kubernetes e2e suite [sig-api-machinery] health handlers should contain necessary checks
Kubernetes e2e suite [sig-api-machinery] server version should find the server version [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should be able to schedule after more than 100 missed schedules
Kubernetes e2e suite [sig-apps] CronJob should delete failed finished jobs with limit of one job
Kubernetes e2e suite [sig-apps] CronJob should delete successful finished jobs with limit of one successful job
Kubernetes e2e suite [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should support CronJob API operations [Conformance]
Kubernetes e2e suite [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [sig-apps] Deployment should not disrupt a cloud load-balancer's connectivity during rollout
Kubernetes e2e suite [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]
Kubernetes e2e suite [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]
Kubernetes e2e suite [sig-apps] DisruptionController should observe that the PodDisruptionBudget status is not updated for unmanaged pods
Kubernetes e2e suite [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]
Kubernetes e2e suite [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname
Kubernetes e2e suite [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [sig-apps] Job should delete pods when suspended
Kubernetes e2e suite [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [sig-apps] Job should not create pods when created in suspend state
Kubernetes e2e suite [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [sig-apps] ReplicaSet Replace and Patch tests [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet ReplicaSet should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [sig-apps] ReplicaSet should validate ReplicaSet Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate StatefulSet Status endpoints [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet MinReadySeconds should be honored when enabled
Kubernetes e2e suite [sig-apps] TTLAfterFinished job should be deleted once it finishes after TTL seconds
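
For the "StatefulSet MinReadySeconds should be honored when enabled" case above: with spec.minReadySeconds set, a replica only counts toward status.availableReplicas once it has been Ready for that long, so availableReplicas briefly lags readyReplicas. A hedged polling sketch; the StatefulSet name "web", namespace, and interval are invented:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; substitute your own.
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        for i := 0; i < 30; i++ {
            ss, err := client.AppsV1().StatefulSets("default").Get(
                context.TODO(), "web", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            // availableReplicas catches up to readyReplicas only after
            // each pod has been Ready for spec.minReadySeconds.
            fmt.Printf("ready=%d available=%d\n",
                ss.Status.ReadyReplicas, ss.Status.AvailableReplicas)
            if ss.Spec.Replicas != nil && ss.Status.AvailableReplicas == *ss.Spec.Replicas {
                return
            }
            time.Sleep(10 * time.Second)
        }
    }
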
Kubernetes e2e suite [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
Kubernetes e2e suite [sig-auth] Certificates API [Privileged:ClusterAdmin] should support building a client with a CSR
Kubernetes e2e suite [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should ensure a single API token exists
Kubernetes e2e suite [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount projected service account token [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should set ownership and permission when RunAsUser or FsGroup is present [LinuxOnly] [NodeFeature:FSGroup]
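
The "opting out of API token automount" case above turns on a single pod-spec field. An illustrative sketch with invented names and image; it only constructs the object:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        automount := false
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "no-token"},
            Spec: v1.PodSpec{
                // With this set to false, no service account token volume
                // is mounted into the pod's containers.
                AutomountServiceAccountToken: &automount,
                Containers: []v1.Container{{
                    Name:  "main",
                    Image: "registry.k8s.io/pause:3.8",
                }},
            },
        }
        fmt.Printf("%s automount=%v\n", pod.Name, *pod.Spec.AutomountServiceAccountToken)
    }
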
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should reuse port when applied to an existing SVC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services are included in cluster-info [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for cronjob
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl version should check if all data is printed [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a failing command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes execing into a container with a successful command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes running a failing command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes running a successful command
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client kubectl wait should ignore not found error with --for=delete
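
The last kubectl case above can be reproduced from any script. A hedged sketch that shells out to kubectl (assumed on PATH; the pod name is invented):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The test above asserts that waiting for deletion of an object
        // that is already gone is treated as success, not as an error.
        out, err := exec.Command("kubectl", "wait", "--for=delete",
            "pod/example", "--timeout=30s").CombinedOutput()
        fmt.Printf("output: %s err: %v\n", out, err)
    }
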
Kubernetes e2e suite [sig-instrumentation] Events API should delete a collection of events [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events should delete a collection of events [Conformance]
Kubernetes e2e suite [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when initial unready endpoints get ready
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a ClusterIP service
Kubernetes e2e suite [sig-network] Conntrack should be able to preserve UDP traffic when server pod cycles for a NodePort service
Kubernetes e2e suite [sig-network] Conntrack should drop INVALID conntrack entries [Privileged]
Kubernetes e2e suite [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [sig-network] DNS should support configurable pod DNS nameservers [Conformance]
Kubernetes e2e suite [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]
Kubernetes e2e suite [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]
Kubernetes e2e suite [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]
Kubernetes e2e suite [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Ingress API should support creating Ingress API operations [Conformance]
Kubernetes e2e suite [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
Kubernetes e2e suite [sig-network] KubeProxy should set TCP CLOSE_WAIT timeout [Privileged]
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy API operations
Kubernetes e2e suite [sig-network] NetworkPolicy API should support creating NetworkPolicy API operations
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy
Kubernetes e2e suite [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node using proxy subresource
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [sig-network] SCTP [LinuxOnly] should allow creating a basic SCTP service with pod and endpoints
Kubernetes e2e suite [sig-network] SCTP [LinuxOnly] should create a ClusterIP Service with SCTP ports
Kubernetes e2e suite [sig-network] SCTP [LinuxOnly] should create a Pod with SCTP HostPort
Kubernetes e2e suite [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is true
Kubernetes e2e suite [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to up and down services
Kubernetes e2e suite [sig-network] Services should be able to update service type to NodePort listening on same port number but different protocols
Kubernetes e2e suite [sig-network] Services should be possible to connect to a service via ExternalIP when the external IP is not assigned to a node
Kubernetes e2e suite [sig-network] Services should be rejected for evicted pods (no endpoints exist)
Kubernetes e2e suite [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [sig-network] Services should complete a service status lifecycle [Conformance]
Kubernetes e2e suite [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [sig-network] Services should delete a collection of services [Conformance]
Kubernetes e2e suite [sig-network] Services should find a service from listing all namespaces [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [sig-network] Services should not be able to connect to terminating and unready endpoints if PublishNotReadyAddresses is false
Kubernetes e2e suite [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should test the lifecycle of an Endpoint [Conformance]
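
The session-affinity cases above configure a Service so that a given client IP keeps hitting the same backend, with an expiry. A minimal sketch, assuming a hypothetical selector, port, and service name:

    package main

    import (
        "context"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig path; substitute your own.
        config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        timeout := int32(60)
        svc := &v1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "affinity-svc"},
            Spec: v1.ServiceSpec{
                Selector: map[string]string{"app": "affinity"},
                Ports: []v1.ServicePort{{
                    Port:       80,
                    TargetPort: intstr.FromInt(8080),
                }},
                // Route each client IP to the same backend pod, and expire
                // that mapping after the configured timeout.
                SessionAffinity: v1.ServiceAffinityClientIP,
                SessionAffinityConfig: &v1.SessionAffinityConfig{
                    ClientIP: &v1.ClientIPConfig{TimeoutSeconds: &timeout},
                },
            },
        }
        if _, err := client.CoreV1().Services("default").Create(
            context.TODO(), svc, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }
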
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [Excluded:WindowsDocker] [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Ephemeral Containers [NodeFeature:EphemeralContainers] will start an ephemeral container in an existing pod
Kubernetes e2e suite [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [sig-node] Mount propagation should propagate mounts within defined scopes
Kubernetes e2e suite [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [sig-node] Pods Extended Delete Grace Period should be submitted and removed
Kubernetes e2e suite [sig-node] Pods Extended Pod Container Status should never report success for a pending container
Kubernetes e2e suite [sig-node] Pods Extended Pod Container lifecycle evicted pods should be terminal
Kubernetes e2e suite [sig-node] Pods Extended Pod Container lifecycle should not create extra sandbox if all containers are done
Kubernetes e2e suite [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [sig-node] Probing container should be restarted when the startup probe fails
Kubernetes e2e suite [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while a pod is in the process of terminating
Kubernetes e2e suite [sig-node] Probing container should mark readiness on pods to false while a pod is in the process of terminating when the pod has a readiness probe
Kubernetes e2e suite [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
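
The probing cases above come down to how startup, liveness, and readiness probes interact: the liveness probe is held off until the startup probe first succeeds, after which repeated liveness failures restart the container. A hedged container-spec sketch; the image, paths, and thresholds are illustrative assumptions:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        c := v1.Container{
            Name:  "probed",
            Image: "registry.k8s.io/e2e-test-images/agnhost:2.39",
            // Kubelet runs the startup probe first; only once it succeeds
            // does the liveness probe begin, so a slow-starting container
            // is not killed prematurely.
            StartupProbe: &v1.Probe{
                ProbeHandler: v1.ProbeHandler{
                    HTTPGet: &v1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
                },
                FailureThreshold: 30,
                PeriodSeconds:    2,
            },
            LivenessProbe: &v1.Probe{
                ProbeHandler: v1.ProbeHandler{
                    HTTPGet: &v1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
                },
                FailureThreshold: 3,
                PeriodSeconds:    10,
            },
        }
        fmt.Println("container:", c.Name)
    }
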
Kubernetes e2e suite [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp default which is unconfined [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp runtime/default [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp unconfined on the container [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support seccomp unconfined on the pod [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [sig-node] kubelet Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (late-binding)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should verify that all csinodes have volume limits
Kubernetes e2e suite [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=File
Kubernetes e2e suite [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should modify fsGroup if fsGroupPolicy=default
Kubernetes e2e suite [sig-storage] CSI mock volume CSI FSGroupPolicy [LinuxOnly] should not modify fsGroup if fsGroupPolicy=None
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for ephemeral volume and drivers with attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should be plumbed down when csiServiceAccountTokenEnabled=true
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when CSIDriver is not deployed
Kubernetes e2e suite [sig-storage] CSI mock volume CSIServiceAccountToken token should not be plumbed down when csiServiceAccountTokenEnabled=false
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity disabled
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity unused
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, have capacity
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, insufficient capacity
Kubernetes e2e suite [sig-storage] CSI mock volume CSIStorageCapacity CSIStorageCapacity used, no capacity
Kubernetes e2e suite [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should not pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [sig-storage] CSI mock volume Delegate FSGroup to CSI driver [LinuxOnly] should pass FSGroup to CSI driver if it is set in pod and driver supports VOLUME_MOUNT_GROUP
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, immediate binding
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, late binding, no topology
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity exhausted, late binding, with topology
Kubernetes e2e suite [sig-storage] CSI mock volume storage capacity unlimited
Kubernetes e2e suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC
Kubernetes e2e suite [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
Kubernetes e2e suite [sig-storage] PVC Protection Verify "immediate" deletion of a PVC that is not in active use by a pod
Kubernetes e2e suite [sig-storage] PVC Protection Verify that PVC in active use by a pod is not removed immediately
Kubernetes e2e suite [sig-storage] PVC Protection Verify that scheduling of a pod that uses PVC that is being deleted fails and the pod becomes Unschedulable
Kubernetes e2e suite [sig-storage] PersistentVolumes-expansion loopback local block volume should support online expansion on node
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Container restart should verify that container can restart successfully after configmaps modified
Kubernetes e2e suite [sig-storage] Volumes ConfigMap should be mountable
kubetest2 Down
kubetest2 Up
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: CSI Ephemeral-volume (default fs)] ephemeral should support two pods which have the same volume definition
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite External Storage [Driver: ebs.csi.aws.com] [Testpattern: Pre-provisioned Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (fairness)
Kubernetes e2e suite [sig-api-machinery] API priority and fairness should ensure that requests can't be drowned out (priority)
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] [Flaky] kubectl explain works for CR with the same resource name as built-in object.
Kubernetes e2e suite [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin][Alpha][Feature:CustomResourceValidationExpressions] MUST NOT fail validation for create of a custom resource that satisfies the x-kubernetes-validator rules
Kubernetes e2e suite [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin][Alpha][Feature:CustomResourceValidationExpressions] MUST fail create of a custom resource definition that contains a x-kubernetes-validator rule that refers to a property that do not exist
Kubernetes e2e suite [sig-api-machinery] CustomResourceValidationRules [Privileged:ClusterAdmin][Alpha][Feature:CustomResourceValidationExpressions] MUST fail validation for create of a custom resource that does not satisfy the x-kubernetes-validator rules
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [sig-api-machinery] StorageVersion resources [Feature:StorageVersionAPI] storage version with non-existing id should be GC'ed
Kubernetes e2e suite [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should surge pods onto nodes when spec was updated and update strategy is RollingUpdate
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kube-proxy should recover after being killed accidentally
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction [Serial]
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction [Serial]
Kubernetes e2e suite [sig-apps] Job should run a job to completion with CPU requests [Serial]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [sig-apps] StatefulSet Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenDeleted)
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs after adopting pod (WhenScaled) [Feature:StatefulSetAutoDeletePVC]
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a OnScaledown policy
Kubernetes e2e suite [sig-apps] StatefulSet Non-retain StatefulSetPersistentVolumeClaimPolicy [Feature:StatefulSetAutoDeletePVC] should delete PVCs with a WhenDeleted policy
Kubernetes e2e suite [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should allow pods under the privileged policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should enforce the restricted policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy [Feature:PodSecurityPolicy] should forbid pod creation when no PSP is available
Kubernetes e2e suite [sig-auth] ServiceAccount admission controller migration [Feature:BoundServiceAccountTokenVolume] master upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow]
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups during processing scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when cluster size changed
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) ReplicationController light Should scale from 2 pods to 1 pod [Slow]
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should not scale up on a busy sidecar with an idle application
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicaSet with idle sidecar (ContainerResource use case) Should scale from 1 pod to 3 pods and from 3 to 5 on a busy application with an idle sidecar container
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Object from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver with Prometheus [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target average value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two External metrics from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two metrics of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command with --leave-stdin-open
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes [Slow] running a failing command without --restart=Never, but with --rm
Kubernetes e2e suite [sig-cloud-provider-gcp] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [sig-cloud-provider-gcp] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [sig-cloud-provider-gcp] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on its all public IP addresses
Kubernetes e2e suite [sig-cloud-provider-gcp] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on its all public IP addresses
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [sig-cloud-provider-gcp] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [sig-cloud-provider-gcp] [Disruptive]NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should resign the bootstrap tokens when the clusterInfo ConfigMap updated [Serial][Disruptive]
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the new added bootstrap tokens
Kubernetes e2e suite [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s seconds, for a duration of 2m0s
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [sig-network] DNS configMap nameserver Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [sig-network] Feature:Topology Hints should distribute endpoints evenly
Kubernetes e2e suite [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [sig-network] Firewall rule control plane should not expose well-known ports
Kubernetes e2e suite [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should allow IngressClass to have Namespace-scoped parameters [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should not set default value if no default IngressClass [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should prevent Ingress creation if more than 1 IngressClass marked as default [Serial]
Kubernetes e2e suite [sig-network] IngressClass [Feature:Ingress] should set default value on new IngressClass [Serial]
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should only target nodes with endpoints
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work for type=LoadBalancer
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work for type=NodePort
Kubernetes e2e suite [sig-network] LoadBalancers ESIPP [Slow] should work from pods
Kubernetes e2e suite [sig-network] LoadBalancers should be able to change the type and ports of a TCP service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to change the type and ports of a UDP service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to create an internal type load balancer [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should have session affinity work for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters] [LinuxOnly]
Kubernetes e2e suite [sig-network] LoadBalancers should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [sig-network] LoadBalancers should reconcile LB health check interval [Slow][Serial][Disruptive]
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to create a ClusterIP service
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should be able to switch between IG and NEG modes
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] [Flaky] should sync endpoints to NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [sig-network] Loadbalancing: L7 [Slow] Nginx should conform to Ingress spec
Kubernetes e2e suite [sig-network] Netpol API should support creating NetworkPolicy API with endport field [Feature:NetworkPolicyEndPort]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny egress from all pods in a namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny egress from pods based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should deny ingress from pods on other namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce ingress policy allowing any port traffic to a server on a specific protocol [Feature:NetworkPolicy] [Feature:UDP]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Multiple PodSelectors and NamespaceSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy based on any PodSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic for a target [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow ingress traffic from pods in all namespaces [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic based on NamespaceSelector with MatchLabels using default ns label [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should not allow access by TCP when a policy specifies only UDP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should not mistakenly treat 'protocol: SCTP' as 'protocol: TCP', even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should properly isolate pods that are selected by a policy allowing SCTP, even if the plugin doesn't support SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should support denying of egress traffic on the client side (even if the server explicitly allows this traffic) [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol NetworkPolicy between server and client should work with Ingress, Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Netpol [LinuxOnly] NetworkPolicy between server and client using UDP should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [Feature:SCTPConnectivity][LinuxOnly][Disruptive] NetworkPolicy between server and client using SCTP should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should deny ingress access to updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce except clause while egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions[Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should ensure an IP overlapping both IPBlock.CIDR and IPBlock.Except is allowed [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should not allow access by TCP when a policy specifies only SCTP [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-all' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny-ingress' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicyLegacy [LinuxOnly] NetworkPolicy between server and client should work with Ingress,Egress specified together [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for multiple endpoint-Services with same selector
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for service endpoints using hostNetwork
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should support basic nodePort: udp functionality
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [sig-network] Networking IPerf2 [Feature:Networking-Performance] should run iperf2
Kubernetes e2e suite [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [sig-network] Networking should provider Internet connection for containers using DNS [Feature:Networking-DNS]
Kubernetes e2e suite [sig-network] Networking should recreate its iptables rules if they are deleted [Disruptive]
Kubernetes e2e suite [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [sig-network] Services GCE [Slow] should be able to create and tear down a standard-tier load balancer [Slow]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod (hostNetwork: true) to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod and Node, to Pod (hostNetwork: true) [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should respect internalTrafficPolicy=Local Pod to Pod [Feature:ServiceInternalTrafficPolicy]
Kubernetes e2e suite [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should be able to handle large requests: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: http [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for client IP based session affinity: udp [LinuxOnly]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for node-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: sctp [Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for pod-Service: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should function for service endpoints using hostNetwork
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: http
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] Granular Checks: Services Secondary IP Family [LinuxOnly] should update endpoints: udp
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should be able to reach pod on ipv4 and ipv6 ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create a single stack service with cluster ip from primary service range
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv4 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv4,v6 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv6 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should create service with ipv6,v4 cluster ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStack] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [sig-network] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [sig-node] Kubelet [Serial] [Slow] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] eventually evict pod with finite tolerations from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]
Kubernetes e2e suite [sig-node] NodeProblemDetector should run without error
Kubernetes e2e suite [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeAlphaFeature:GRPCContainerProbe][Feature:GRPCContainerProbe]
Kubernetes e2e suite [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeAlphaFeature:GRPCContainerProbe][Feature:GRPCContainerProbe]
Kubernetes e2e suite [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling with taints [Serial]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling without taints
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [sig-node] [Feature:Example] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [sig-node] [Feature:Example] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [sig-node] [Feature:Example] Secret should create a pod that reads a secret
Kubernetes e2e suite [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [sig-node] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-node] kubelet host cleanup with volume mounts [HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones [Serial]
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones [Serial]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] PodTopologySpread Filtering validates 4 pods with MaxSkew=1 are evenly distributed into 2 nodes
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates pod overhead is considered along with resource limits of pods that are allowed to run verify pod overhead is accounted for
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PodTopologySpread Preemption validates proper pods are preempted
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to node that don't match the PodAntiAffinity terms
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] PodTopologySpread Scoring validates pod should be preferably scheduled to node which makes the matching pods more evenly distributed
Kubernetes e2e suite [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (Always)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, new pod fsgroup applied to volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with different fsgroup applied to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] fsgroupchangepolicy (OnRootMismatch)[LinuxOnly], pod created with an initial fsgroup, volume contents ownership changed via chgrp in first pod, new pod with same fsgroup skips ownership changes to the volume contents
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-stress multiple pods should access different volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volume-lifecycle-performance should provision volumes at scale within performance constraints [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)(allowExpansion)][Feature:Windows] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with pvc data source in parallel [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable-stress[Feature:VolumeSnapshotDataSource] should support snapshotting of many volumes repeatedly [Slow] [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (delete policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works after modifying source data, check deletion (persistent)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Ephemeral Snapshot (retain policy)] snapshottable[Feature:VolumeSnapshotDataSource] volume snapshot controller should check snapshot fields, check restore correctly works, check deletion (ephemeral)
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs) (immediate-binding)] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Generic Ephemeral-volume (default fs)] volumeLimits should support volume limits [Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][Feature:Windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv used in a pod that is force deleted while the kubelet is down cleans up when the kubelet returns.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive][LinuxOnly] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single read-only volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and its clone from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the volume and restored snapshot from pods on the same node [LinuxOnly][Feature:VolumeSnapshotDataSource][Feature:VolumeSourceXFS]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]