go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] wait for service account "default" in namespace "reboot-4080": timed out waiting for the condition
In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 20:30:48.422

from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:28:45.765
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:28:45.765 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:28:45.765
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 20:28:45.765
Jan 29 20:28:45.765: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 20:28:45.766
Jan 29 20:30:48.422: INFO: Unexpected error:
<*fmt.wrapError | 0xc0028b0000>: {
    msg: "wait for service account \"default\" in namespace \"reboot-4080\": timed out waiting for the condition",
    err: <*errors.errorString | 0xc000207ca0>{
        s: "timed out waiting for the condition",
    },
}
[FAILED] wait for service account "default" in namespace "reboot-4080": timed out waiting for the condition
In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 20:30:48.422
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:30:48.422 (2m2.657s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:30:48.422
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 20:30:48.422
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-7226v to bootstrap-e2e-minion-group-tq0k
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.041440764s (1.04151063s including waiting)
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-7226v_kube-system(50c9e097-5b0f-4df8-906b-d031ff7e5d85)
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": dial tcp 10.64.2.6:8181: connect: connection refused
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-7226v
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-7226v
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-7226v_kube-system(50c9e097-5b0f-4df8-906b-d031ff7e5d85)
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: Get "http://10.64.2.16:8181/ready": dial tcp 10.64.2.16:8181: connect: connection refused
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: Get "http://10.64.2.18:8181/ready": dial tcp 10.64.2.18:8181: connect: connection refused
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-dfbff to bootstrap-e2e-minion-group-qdgj
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.662889684s (2.662899979s including waiting)
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.5:8181/ready": dial tcp 10.64.3.5:8181: connect: connection refused
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-dfbff
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": dial tcp 10.64.3.15:8181: connect: connection refused
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-dfbff_kube-system(56a8d266-9fa7-4aaf-b9dd-ddc06dee7b8e)
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.22:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-dfbff
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-dfbff_kube-system(56a8d266-9fa7-4aaf-b9dd-ddc06dee7b8e)
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-dfbff
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-7226v
Jan 29 20:30:48.633: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 20:30:48.633: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed:
Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300)
Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb)
Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_90a62 became leader
Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_56f11 became leader
Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8ba66 became leader
Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ab7d9 became leader
Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_fbf7 became leader
Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_c6e94 became leader
Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8a973 became leader
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-4nk68 to bootstrap-e2e-minion-group-tq0k
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 668.677173ms (668.692909ms including waiting)
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8093/healthz": dial tcp 10.64.2.5:8093: connect: network is unreachable
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-4nk68_kube-system(9618808b-f13f-4c68-85f0-0604438645d3)
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Liveness probe failed: Get "http://10.64.2.9:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-4nk68_kube-system(9618808b-f13f-4c68-85f0-0604438645d3)
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-cd6h5 to bootstrap-e2e-minion-group-9w8s
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 663.587649ms (663.598454ms including waiting)
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-cd6h5_kube-system(7ee8917e-685a-4438-ae1f-31d3475142e7)
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-cd6h5_kube-system(7ee8917e-685a-4438-ae1f-31d3475142e7)
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-wh8g5 to bootstrap-e2e-minion-group-qdgj
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.480831038s (1.480840227s including waiting)
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container konnectivity-agent
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-wh8g5_kube-system(7a8f5ba8-53f9-4149-b38f-7c10aa331632)
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Liveness probe failed: Get "http://10.64.3.20:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-wh8g5
Jan 29 20:30:48.633: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-cd6h5
Jan 29 20:30:48.633: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-4nk68
Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 20:30:48.633: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 20:30:48.633: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 20:30:48.633: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 20:30:48.633: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 20:30:48.633: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 20:30:48.633: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 20:30:48.633: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 20:30:48.633: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 20:30:48.633: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 20:30:48.633: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_5cb0b339-27fa-478a-a12b-f3e084d9ff7a became leader
Jan 29 20:30:48.633: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_896d20be-ed11-4ad6-ba6f-aeff112d6cdf became leader
Jan 29 20:30:48.633: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_63955417-795f-4dea-b69f-0c2330df6065 became leader
Jan 29 20:30:48.633: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ae362da5-d8bb-4dee-8d15-074d6258d290 became leader
Jan 29 20:30:48.633: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c3dc2798-e744-4921-97b3-0c8487d60e65 became leader
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-msh27 to bootstrap-e2e-minion-group-qdgj
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.772576572s (2.77258737s including waiting)
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-msh27
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container autoscaler
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-msh27_kube-system(c36a8737-0bbd-47ac-8331-9bb067fda14a)
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-msh27
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container autoscaler
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-msh27_kube-system(c36a8737-0bbd-47ac-8331-9bb067fda14a)
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-msh27
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9w8s_kube-system(41c8500189f52bcbb0d902b75d8c693f)
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9w8s_kube-system(41c8500189f52bcbb0d902b75d8c693f)
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9w8s_kube-system(41c8500189f52bcbb0d902b75d8c693f)
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773)
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773)
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773)
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f)
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f)
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f)
Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_908ace71-8fd9-4871-8936-5aab7c5cfed3 became leader
Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_978a61b3-7079-42ae-9f59-cf7b479348e3 became leader
Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_34f033d2-db64-4ab7-af5f-d35e0c069db5 became leader
Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_33ec300a-9e85-4dd8-be41-6a41765bbb91 became leader
Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ec577213-5b4f-40af-9d4a-7d6d74c43090 became leader
Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f3f60a97-a50c-44d4-96ac-93a72ec88765 became leader
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-wxpff to bootstrap-e2e-minion-group-qdgj
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.193873035s (1.193895245s including waiting)
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-wxpff
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Liveness probe failed: Get "http://10.64.3.14:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-wxpff
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-wxpff
Jan 29 20:30:48.634: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5nlck to bootstrap-e2e-minion-group-9w8s Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 793.012358ms (793.412637ms including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.021376731s (2.021411909s including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-g8pvk to bootstrap-e2e-master Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 834.33593ms (834.358292ms including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for 
metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.93646676s (1.936479152s including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ggkjj to bootstrap-e2e-minion-group-tq0k Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 815.429367ms (815.447239ms including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.922350868s (1.922366582s including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-jcl2g to bootstrap-e2e-minion-group-qdgj Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 771.095227ms (771.126697ms including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container 
metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.922287469s (1.92232627s including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
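Editor's note: the DNSConfigForming warnings repeated through these daemonset events come from a hard glibc limit: a resolver honors at most three "nameserver" lines in resolv.conf (MAXNS), so when a node's upstream DNS config would exceed that, kubelet applies only the first three and logs the rest as omitted, which is exactly the "1.1.1.1 8.8.8.8 1.0.0.1" line shown. A small sketch of the trimming idea follows; the file path and the keep-first-three rule are a simplification of kubelet's actual behavior.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNS = 3 // glibc MAXNS: resolvers use at most 3 nameservers
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()
        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNS {
            fmt.Printf("nameserver limits exceeded; applying: %s (omitting %d)\n",
                strings.Join(servers[:maxNS], " "), len(servers)-maxNS)
        }
    }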
Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5nlck Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-g8pvk Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ggkjj Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-jcl2g Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
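Editor's note: the two FailedScheduling entries above capture the scheduler's view during master startup: first no nodes existed at all, then the only registered node still carried the node.kubernetes.io/not-ready taint, which metrics-server does not tolerate. Pods opt in to tainted nodes via tolerations; the NoExecute variant of the same taint family is also what drives the TaintManagerEviction entries elsewhere in this dump. A sketch of the matching toleration using the k8s.io/api types follows (whether a given workload should tolerate not-ready nodes is a design choice, not something this log establishes; the NoSchedule effect is inferred from the scheduling context, since the message omits it).

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Matches the untolerated taint from the FailedScheduling message.
        tol := corev1.Toleration{
            Key:      "node.kubernetes.io/not-ready",
            Operator: corev1.TolerationOpExists,
            Effect:   corev1.TaintEffectNoSchedule,
        }
        fmt.Printf("%+v\n", tol)
    }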
Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-4pd7g to bootstrap-e2e-minion-group-qdgj Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.362079376s (3.362094624s including waiting) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.97442928s (2.974455307s including waiting) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "https://10.64.3.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Liveness probe failed: Get "https://10.64.3.9:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-4pd7g Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-4pd7g Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-t82lt to bootstrap-e2e-minion-group-9w8s Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.394960598s (1.395000082s including waiting) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.053276162s (1.053291079s including waiting) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": dial tcp 10.64.0.3:10250: connect: connection refused Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get 
"https://10.64.0.3:10250/livez": dial tcp 10.64.0.3:10250: connect: connection refused Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.4:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-t82lt_kube-system(e9e62670-bc8e-4962-b73b-5c0a63921679) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": dial tcp 10.64.0.5:10250: connect: connection refused Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.5:10250/livez": read tcp 10.64.0.1:47518->10.64.0.5:10250: read: connection reset by peer Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-t82lt Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
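Editor's note: the metrics-server probe events cycle through three distinct failure signatures worth telling apart when reading such dumps: "connect: connection refused" (nothing is listening; the container is down or the port is closed), "Client.Timeout exceeded while awaiting headers" (the process is up but not answering within the probe timeout), and "read: connection reset by peer" (the peer dropped the connection mid-request). A sketch of classifying them from Go's error chain follows; the target URL is copied from the events, and reaching it from outside the cluster would of course fail differently.

    package main

    import (
        "context"
        "errors"
        "fmt"
        "net/http"
        "os"
        "syscall"
        "time"
    )

    func classify(err error) string {
        switch {
        case err == nil:
            return "healthy"
        case errors.Is(err, syscall.ECONNREFUSED):
            return "refused: nothing is listening"
        case errors.Is(err, context.DeadlineExceeded) || os.IsTimeout(err):
            return "timeout: process up but unresponsive"
        case errors.Is(err, syscall.ECONNRESET):
            return "reset: peer dropped the connection mid-request"
        default:
            return "other: " + err.Error()
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        req, err := http.NewRequestWithContext(ctx, http.MethodGet,
            "https://10.64.0.5:10250/readyz", nil)
        if err != nil {
            panic(err)
        }
        _, err = http.DefaultClient.Do(req)
        fmt.Println(classify(err))
    }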
Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-t82lt Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.14:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.14:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.14:10250/livez": dial tcp 10.64.0.14:10250: connect: connection refused Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-t82lt_kube-system(e9e62670-bc8e-4962-b73b-5c0a63921679) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-t82lt_kube-system(e9e62670-bc8e-4962-b73b-5c0a63921679) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-t82lt Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 20:30:48.634: INFO: event 
for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-qdgj Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.592104829s (3.592134801s including waiting) Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300) Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
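Editor's note: unlike the ReplicaSet-managed pods earlier in this dump, volume-snapshot-controller-0 keeps the same name across every restart because it belongs to a StatefulSet, which is why a single pod accumulates this long event history across reboots. The dump itself is just the pod's event stream; the same view can be pulled ad hoc with client-go and a field selector on the involved object, as in the sketch below (the kubeconfig path is an assumption).

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        evs, err := cs.CoreV1().Events("kube-system").List(context.Background(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=volume-snapshot-controller-0",
        })
        if err != nil {
            panic(err)
        }
        for _, e := range evs.Items {
            fmt.Printf("{%s %s} %s: %s\n", e.Source.Component, e.Source.Host, e.Reason, e.Message)
        }
    }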
Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300) Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300) Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:30:48.634 (212ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:30:48.634 Jan 29 20:30:48.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 29 20:30:48.682: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. 
Failure Jan 29 20:30:50.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure
[editor's note: the same poll entry repeats roughly every 2 seconds from 20:30:52.729 through 20:33:44.731 and is elided here for readability; every check found Condition Ready true but the identical node.kubernetes.io/unreachable NoExecute taint (added 2023-01-29 20:20:45 +0000 UTC) still present, and each ended in "Failure"]
Jan 29 20:33:46.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. 
Failure Jan 29 20:33:48.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:48.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:33:48.781 (3m0.147s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:33:48.781 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:33:48.781 STEP: Collecting events from namespace "reboot-4080". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 20:33:48.781 STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 20:33:48.822 Jan 29 20:33:48.864: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 20:33:48.864: INFO: Jan 29 20:33:48.910: INFO: Logging node info for node bootstrap-e2e-master Jan 29 20:33:48.952: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master d4485f0c-f2c7-49e9-810e-6a5d0ed1fb44 4076 0 2023-01-29 19:58:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 19:58:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 19:59:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 19:59:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 20:30:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-1-3/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 19:59:08 +0000 UTC,LastTransitionTime:2023-01-29 19:59:08 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 20:30:28 +0000 UTC,LastTransitionTime:2023-01-29 19:58:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 20:30:28 +0000 UTC,LastTransitionTime:2023-01-29 19:58:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 20:30:28 +0000 UTC,LastTransitionTime:2023-01-29 19:58:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 20:30:28 +0000 UTC,LastTransitionTime:2023-01-29 19:59:12 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.160.196,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gce-1-3.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gce-1-3.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:309bd0260e97f67bc6fae1aeb8b97dc4,SystemUUID:309bd026-0e97-f67b-c6fa-e1aeb8b97dc4,BootID:e92b9e65-25ee-4c70-8707-35ebd1478373,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 20:33:48.953: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 20:33:49.013: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 20:33:49.078: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container kube-controller-manager ready: false, restart count 7 Jan 29 20:33:49.078: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container etcd-container ready: true, restart count 4 Jan 29 20:33:49.078: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 29 20:33:49.078: INFO: 
kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 19:58:24 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 29 20:33:49.078: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 19:58:24 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container l7-lb-controller ready: false, restart count 11 Jan 29 20:33:49.078: INFO: metadata-proxy-v0.1-g8pvk started at 2023-01-29 19:59:22 +0000 UTC (0+2 container statuses recorded) Jan 29 20:33:49.078: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 20:33:49.078: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 20:33:49.078: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container kube-apiserver ready: true, restart count 3 Jan 29 20:33:49.078: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container etcd-container ready: true, restart count 4 Jan 29 20:33:49.078: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container kube-scheduler ready: false, restart count 6 Jan 29 20:33:49.276: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 20:33:49.276: INFO: Logging node info for node bootstrap-e2e-minion-group-9w8s Jan 29 20:33:49.320: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9w8s 7d4d6910-cd1a-4f88-b377-ba2e361d6f58 4181 0 2023-01-29 19:58:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9w8s kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 19:58:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 20:15:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 20:16:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 20:31:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 20:31:51 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-1-3/us-west1-b/bootstrap-e2e-minion-group-9w8s,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 19:59:08 +0000 UTC,LastTransitionTime:2023-01-29 19:59:08 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:51 +0000 UTC,LastTransitionTime:2023-01-29 20:16:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:51 +0000 UTC,LastTransitionTime:2023-01-29 20:16:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:51 +0000 UTC,LastTransitionTime:2023-01-29 20:16:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 20:31:51 +0000 UTC,LastTransitionTime:2023-01-29 20:16:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.233.143.195,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9w8s.c.k8s-jkns-gce-1-3.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9w8s.c.k8s-jkns-gce-1-3.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a9e577e91a0c299e671ec979c645e19,SystemUUID:8a9e577e-91a0-c299-e671-ec979c645e19,BootID:8214d8ec-eacd-4ca8-88ac-0eea06d64bd8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 20:33:49.320: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9w8s Jan 29 20:33:49.368: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9w8s Jan 29 20:33:49.435: INFO: kube-proxy-bootstrap-e2e-minion-group-9w8s started at 2023-01-29 19:58:51 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.435: INFO: Container kube-proxy ready: true, restart count 8 Jan 29 20:33:49.435: INFO: metadata-proxy-v0.1-5nlck started at 2023-01-29 19:58:52 +0000 UTC (0+2 container statuses recorded) Jan 29 20:33:49.435: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 20:33:49.435: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 20:33:49.435: INFO: konnectivity-agent-cd6h5 started at 2023-01-29 19:59:09 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.435: INFO: Container konnectivity-agent ready: false, restart count 10 Jan 29 20:33:49.435: INFO: metrics-server-v0.5.2-867b8754b9-t82lt started at 2023-01-29 20:00:19 +0000 UTC (0+2 container statuses recorded) Jan 29 20:33:49.435: INFO: Container metrics-server ready: false, restart count 12 Jan 29 20:33:49.435: INFO: Container metrics-server-nanny ready: false, restart count 12 Jan 29 20:33:49.617: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-9w8s Jan 29 20:33:49.617: INFO: Logging node info for node bootstrap-e2e-minion-group-qdgj Jan 29 20:33:49.660: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-qdgj 7e95d761-d26d-4801-b6e9-33b590a2c2d6 4161 0 2023-01-29 19:58:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-qdgj kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 19:58:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 20:20:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 20:21:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {node-problem-detector Update v1 2023-01-29 20:31:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 20:31:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-1-3/us-west1-b/bootstrap-e2e-minion-group-qdgj,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2023-01-29 20:20:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 19:59:08 +0000 UTC,LastTransitionTime:2023-01-29 19:59:08 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:38 +0000 
UTC,LastTransitionTime:2023-01-29 20:21:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:38 +0000 UTC,LastTransitionTime:2023-01-29 20:21:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:38 +0000 UTC,LastTransitionTime:2023-01-29 20:21:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 20:31:38 +0000 UTC,LastTransitionTime:2023-01-29 20:21:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.112.91,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-qdgj.c.k8s-jkns-gce-1-3.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-qdgj.c.k8s-jkns-gce-1-3.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a39255b95751b1e6520d2eb03d96b79e,SystemUUID:a39255b9-5751-b1e6-520d-2eb03d96b79e,BootID:8b2e74b0-5884-4e94-8d97-d70b14c4d7c4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 
registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 20:33:49.660: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-qdgj Jan 29 20:33:49.708: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-qdgj Jan 29 20:33:49.779: INFO: konnectivity-agent-wh8g5 started at 2023-01-29 19:59:09 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container konnectivity-agent ready: false, restart count 6 Jan 29 20:33:49.779: INFO: kube-proxy-bootstrap-e2e-minion-group-qdgj started at 2023-01-29 19:58:53 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container kube-proxy ready: true, restart count 10 Jan 29 20:33:49.779: INFO: l7-default-backend-8549d69d99-wxpff started at 2023-01-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container default-http-backend ready: true, restart count 3 Jan 29 20:33:49.779: INFO: volume-snapshot-controller-0 started at 2023-01-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container volume-snapshot-controller ready: false, restart count 15 Jan 29 20:33:49.779: INFO: coredns-6846b5b5f-dfbff started at 2023-01-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container coredns ready: true, restart count 11 Jan 29 20:33:49.779: INFO: kube-dns-autoscaler-5f6455f985-msh27 started at 2023-01-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container autoscaler ready: true, restart count 6 Jan 29 20:33:49.779: INFO: metadata-proxy-v0.1-jcl2g started at 2023-01-29 19:58:54 +0000 UTC (0+2 container statuses recorded) Jan 29 20:33:49.779: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 20:33:49.779: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 20:33:49.952: INFO: Latency metrics for node bootstrap-e2e-minion-group-qdgj Jan 29 20:33:49.952: INFO: Logging node info for node bootstrap-e2e-minion-group-tq0k Jan 29 20:33:49.995: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-tq0k a9bb0bc5-ef37-4d6a-942e-813e73ebde7c 4217 0 2023-01-29 19:58:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-tq0k kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 19:58:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 20:16:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 20:16:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 20:32:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 20:32:20 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-1-3/us-west1-b/bootstrap-e2e-minion-group-tq0k,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 19:59:08 +0000 UTC,LastTransitionTime:2023-01-29 19:59:08 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 20:32:20 +0000 UTC,LastTransitionTime:2023-01-29 20:16:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 20:32:20 +0000 UTC,LastTransitionTime:2023-01-29 20:16:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 20:32:20 +0000 UTC,LastTransitionTime:2023-01-29 20:16:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 20:32:20 +0000 UTC,LastTransitionTime:2023-01-29 20:16:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.105.126.211,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-tq0k.c.k8s-jkns-gce-1-3.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-tq0k.c.k8s-jkns-gce-1-3.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:66bdc05722af80e8318de3c02e7918c3,SystemUUID:66bdc057-22af-80e8-318d-e3c02e7918c3,BootID:96846aaa-dd3b-49c0-98d8-5eeeec3592e2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 20:33:49.996: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-tq0k Jan 29 20:33:50.043: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-tq0k Jan 29 20:33:50.110: INFO: kube-proxy-bootstrap-e2e-minion-group-tq0k started at 2023-01-29 19:58:52 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:50.110: INFO: Container kube-proxy ready: false, restart count 12 Jan 29 20:33:50.110: INFO: metadata-proxy-v0.1-ggkjj started at 2023-01-29 19:58:53 +0000 UTC (0+2 container statuses recorded) Jan 29 20:33:50.110: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 20:33:50.110: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 20:33:50.110: INFO: konnectivity-agent-4nk68 started at 2023-01-29 19:59:09 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:50.110: INFO: Container konnectivity-agent ready: true, restart count 11 Jan 29 20:33:50.110: INFO: coredns-6846b5b5f-7226v started at 2023-01-29 19:59:16 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:50.110: INFO: Container coredns ready: true, restart count 9 Jan 29 20:33:50.285: INFO: Latency metrics for node bootstrap-e2e-minion-group-tq0k END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:33:50.285 (1.504s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:33:50.285 (1.504s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] 
Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:33:50.285 STEP: Destroying namespace "reboot-4080" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 20:33:50.286 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:33:50.331 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:33:50.331 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:33:50.331 (0s)
Filter through log files
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] wait for service account "default" in namespace "reboot-4080": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 20:30:48.422from junit_01.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:28:45.765 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:28:45.765 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:28:45.765 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 20:28:45.765 Jan 29 20:28:45.765: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 20:28:45.766 Jan 29 20:30:48.422: INFO: Unexpected error: <*fmt.wrapError | 0xc0028b0000>: { msg: "wait for service account \"default\" in namespace \"reboot-4080\": timed out waiting for the condition", err: <*errors.errorString | 0xc000207ca0>{ s: "timed out waiting for the condition", }, } [FAILED] wait for service account "default" in namespace "reboot-4080": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 20:30:48.422 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:30:48.422 (2m2.657s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:30:48.422 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 20:30:48.422 Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-7226v to bootstrap-e2e-minion-group-tq0k Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.041440764s (1.04151063s including waiting) Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-7226v_kube-system(50c9e097-5b0f-4df8-906b-d031ff7e5d85) Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": dial tcp 10.64.2.6:8181: connect: connection refused Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-7226v Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-7226v Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-7226v_kube-system(50c9e097-5b0f-4df8-906b-d031ff7e5d85) Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: Get "http://10.64.2.16:8181/ready": dial tcp 10.64.2.16:8181: connect: connection refused Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: Get "http://10.64.2.18:8181/ready": dial tcp 10.64.2.18:8181: connect: connection refused Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-dfbff to bootstrap-e2e-minion-group-qdgj Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.662889684s (2.662899979s including waiting) Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.5:8181/ready": dial tcp 10.64.3.5:8181: connect: connection refused Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-dfbff Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": dial tcp 10.64.3.15:8181: connect: connection refused Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-dfbff_kube-system(56a8d266-9fa7-4aaf-b9dd-ddc06dee7b8e) Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.22:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-dfbff Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-dfbff_kube-system(56a8d266-9fa7-4aaf-b9dd-ddc06dee7b8e) Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-dfbff Jan 29 20:30:48.633: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-7226v Jan 29 20:30:48.633: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 20:30:48.633: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 20:30:48.633: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 20:30:48.633: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_90a62 became leader Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_56f11 became leader Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8ba66 became leader Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ab7d9 became leader Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_fbf7 became leader Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_c6e94 became leader Jan 29 20:30:48.633: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8a973 became leader Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-4nk68 to bootstrap-e2e-minion-group-tq0k Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 668.677173ms (668.692909ms including waiting) Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent Jan 29 20:30:48.633: INFO: event for 
konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8093/healthz": dial tcp 10.64.2.5:8093: connect: network is unreachable Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-4nk68_kube-system(9618808b-f13f-4c68-85f0-0604438645d3) Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Liveness probe failed: Get "http://10.64.2.9:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-4nk68_kube-system(9618808b-f13f-4c68-85f0-0604438645d3) Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-cd6h5 to bootstrap-e2e-minion-group-9w8s Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 663.587649ms (663.598454ms including waiting) Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-cd6h5_kube-system(7ee8917e-685a-4438-ae1f-31d3475142e7) Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-cd6h5_kube-system(7ee8917e-685a-4438-ae1f-31d3475142e7) Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-wh8g5 to bootstrap-e2e-minion-group-qdgj Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.480831038s (1.480840227s including waiting) Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Started: 
Started container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container konnectivity-agent Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-wh8g5_kube-system(7a8f5ba8-53f9-4149-b38f-7c10aa331632) Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Liveness probe failed: Get "http://10.64.3.20:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 20:30:48.633: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.633: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-wh8g5 Jan 29 20:30:48.633: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-cd6h5 Jan 29 20:30:48.633: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-4nk68 Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 20:30:48.633: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 20:30:48.633: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 20:30:48.633: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 20:30:48.633: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 20:30:48.633: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 20:30:48.633: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 20:30:48.633: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:30:48.633: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 20:30:48.633: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 20:30:48.633: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 20:30:48.633: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 20:30:48.633: INFO: event for 
kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.633: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_5cb0b339-27fa-478a-a12b-f3e084d9ff7a became leader Jan 29 20:30:48.633: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_896d20be-ed11-4ad6-ba6f-aeff112d6cdf became leader Jan 29 20:30:48.633: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_63955417-795f-4dea-b69f-0c2330df6065 became leader Jan 29 20:30:48.633: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ae362da5-d8bb-4dee-8d15-074d6258d290 became leader Jan 29 20:30:48.633: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c3dc2798-e744-4921-97b3-0c8487d60e65 became leader Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-msh27 to bootstrap-e2e-minion-group-qdgj Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.772576572s (2.77258737s including waiting) Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-msh27 Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container autoscaler Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-msh27_kube-system(c36a8737-0bbd-47ac-8331-9bb067fda14a) Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-msh27 Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container autoscaler Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-msh27_kube-system(c36a8737-0bbd-47ac-8331-9bb067fda14a) Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-msh27 Jan 29 20:30:48.633: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9w8s_kube-system(41c8500189f52bcbb0d902b75d8c693f) Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9w8s_kube-system(41c8500189f52bcbb0d902b75d8c693f) Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9w8s_kube-system(41c8500189f52bcbb0d902b75d8c693f) Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773) Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773) Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.633: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773) Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f) Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f) Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy Jan 29 20:30:48.634: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f) Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be 
killed and re-created. Jan 29 20:30:48.634: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_908ace71-8fd9-4871-8936-5aab7c5cfed3 became leader Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_978a61b3-7079-42ae-9f59-cf7b479348e3 became leader Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_34f033d2-db64-4ab7-af5f-d35e0c069db5 became leader Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_33ec300a-9e85-4dd8-be41-6a41765bbb91 became leader Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ec577213-5b4f-40af-9d4a-7d6d74c43090 became leader Jan 29 20:30:48.634: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_f3f60a97-a50c-44d4-96ac-93a72ec88765 became leader Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-wxpff to bootstrap-e2e-minion-group-qdgj Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.193873035s (1.193895245s including waiting) Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-wxpff Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Liveness probe failed: Get "http://10.64.3.14:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-wxpff Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend Jan 29 20:30:48.634: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-wxpff Jan 29 20:30:48.634: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 20:30:48.634: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5nlck to bootstrap-e2e-minion-group-9w8s Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 793.012358ms (793.412637ms including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.021376731s (2.021411909s including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-g8pvk to bootstrap-e2e-master Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 834.33593ms (834.358292ms including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for 
metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.93646676s (1.936479152s including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ggkjj to bootstrap-e2e-minion-group-tq0k Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 815.429367ms (815.447239ms including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.922350868s (1.922366582s including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-jcl2g to bootstrap-e2e-minion-group-qdgj Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 771.095227ms (771.126697ms including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container 
metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.922287469s (1.92232627s including waiting) Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metadata-proxy Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5nlck Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-g8pvk Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ggkjj Jan 29 20:30:48.634: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-jcl2g Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-4pd7g to bootstrap-e2e-minion-group-qdgj Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.362079376s (3.362094624s including waiting) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.97442928s (2.974455307s including waiting) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
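The probe failures above (statuscode 500, and the Client.Timeout variants in the next block) are plain HTTP GETs issued by the kubelet with a short per-probe timeout. A rough Go equivalent, with the URL taken from the events and the 1s timeout matching the probe default; the skip-verify TLS handling reflects that kubelet's HTTPS probes do not verify certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: time.Second, // probe timeoutSeconds defaults to 1
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.64.3.9:10250/readyz")
	if err != nil {
		// An unreachable kubelet yields errors like
		// "net/http: request canceled while waiting for connection
		// (Client.Timeout exceeded while awaiting headers)".
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		fmt.Println("probe failed: HTTP probe failed with statuscode:", resp.StatusCode)
	}
}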
Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "https://10.64.3.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Liveness probe failed: Get "https://10.64.3.9:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-4pd7g Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-4pd7g Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-t82lt to bootstrap-e2e-minion-group-9w8s Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.394960598s (1.395000082s including waiting) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.053276162s (1.053291079s including waiting) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": dial tcp 10.64.0.3:10250: connect: connection refused Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get 
"https://10.64.0.3:10250/livez": dial tcp 10.64.0.3:10250: connect: connection refused Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.4:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-t82lt_kube-system(e9e62670-bc8e-4962-b73b-5c0a63921679) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": dial tcp 10.64.0.5:10250: connect: connection refused Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.5:10250/livez": read tcp 10.64.0.1:47518->10.64.0.5:10250: read: connection reset by peer Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-t82lt Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
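The repeated "Back-off restarting failed container" events above follow kubelet's crash-loop schedule: the restart delay starts at 10s and doubles up to a 5m cap (and resets after the container runs cleanly for 10 minutes). A toy reproduction of the delay sequence, values per kubelet defaults:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d waits %v\n", restart, delay) // 10s, 20s, 40s, ... 5m0s
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}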
Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-t82lt Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.14:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.14:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server-nanny Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.14:10250/livez": dial tcp 10.64.0.14:10250: connect: connection refused Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-t82lt_kube-system(e9e62670-bc8e-4962-b73b-5c0a63921679) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-t82lt_kube-system(e9e62670-bc8e-4962-b73b-5c0a63921679) Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-t82lt Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 20:30:48.634: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 20:30:48.634: INFO: event 
for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-qdgj Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.592104829s (3.592134801s including waiting) Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300) Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
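Everything in this "Collecting events from namespace kube-system" dump is a flat event list printed per involved object. For reference, a client-go sketch that produces the same shape of output; the kubeconfig path is the one logged earlier in this run, and error handling is trimmed:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	events, err := cs.CoreV1().Events("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Mirrors the "event for <name>: {<source> <host>} <reason>: <message>" lines above.
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}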
Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300) Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container volume-snapshot-controller Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300) Jan 29 20:30:48.634: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:30:48.634 (212ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:30:48.634 Jan 29 20:30:48.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 29 20:30:48.682: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. 
Failure Jan 29 20:30:50.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:30:52.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:30:54.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:30:56.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:30:58.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:00.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:02.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:04.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:06.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:08.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:10.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:12.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:14.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:16.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:18.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:20.736: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. 
Failure Jan 29 20:31:22.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:24.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:26.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:28.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:30.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:32.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:34.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:36.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:38.745: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:40.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:42.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:44.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:46.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:48.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:50.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:52.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. 
Failure Jan 29 20:31:54.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:56.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:31:58.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:00.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:02.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:04.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:09.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:10.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:12.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:14.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:16.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:18.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:20.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:22.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:24.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:26.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. 
Failure Jan 29 20:32:28.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:30.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:32.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:34.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:36.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:38.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:40.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:42.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:44.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:46.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:48.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:50.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:52.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:54.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:56.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:32:58.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. 
Failure Jan 29 20:33:00.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:02.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:04.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:06.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:08.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:28.138: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:28.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:30.737: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:32.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:34.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:36.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:38.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:40.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:42.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:44.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:46.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. 
Failure Jan 29 20:33:48.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:33:48.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:33:48.781 (3m0.147s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:33:48.781 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:33:48.781 STEP: Collecting events from namespace "reboot-4080". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 20:33:48.781 STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 20:33:48.822 Jan 29 20:33:48.864: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 20:33:48.864: INFO: Jan 29 20:33:48.910: INFO: Logging node info for node bootstrap-e2e-master Jan 29 20:33:48.952: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master d4485f0c-f2c7-49e9-810e-6a5d0ed1fb44 4076 0 2023-01-29 19:58:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 19:58:52 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 19:59:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 19:59:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 20:30:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-1-3/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 19:59:08 +0000 UTC,LastTransitionTime:2023-01-29 19:59:08 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 20:30:28 +0000 UTC,LastTransitionTime:2023-01-29 19:58:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 20:30:28 +0000 UTC,LastTransitionTime:2023-01-29 19:58:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 20:30:28 +0000 UTC,LastTransitionTime:2023-01-29 19:58:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 20:30:28 +0000 UTC,LastTransitionTime:2023-01-29 19:59:12 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.227.160.196,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gce-1-3.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gce-1-3.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:309bd0260e97f67bc6fae1aeb8b97dc4,SystemUUID:309bd026-0e97-f67b-c6fa-e1aeb8b97dc4,BootID:e92b9e65-25ee-4c70-8707-35ebd1478373,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 20:33:48.953: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 20:33:49.013: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 20:33:49.078: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container kube-controller-manager ready: false, restart count 7 Jan 29 20:33:49.078: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container etcd-container ready: true, restart count 4 Jan 29 20:33:49.078: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 29 20:33:49.078: INFO: 
kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 19:58:24 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 29 20:33:49.078: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 19:58:24 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container l7-lb-controller ready: false, restart count 11 Jan 29 20:33:49.078: INFO: metadata-proxy-v0.1-g8pvk started at 2023-01-29 19:59:22 +0000 UTC (0+2 container statuses recorded) Jan 29 20:33:49.078: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 20:33:49.078: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 20:33:49.078: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container kube-apiserver ready: true, restart count 3 Jan 29 20:33:49.078: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container etcd-container ready: true, restart count 4 Jan 29 20:33:49.078: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 19:58:05 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.078: INFO: Container kube-scheduler ready: false, restart count 6 Jan 29 20:33:49.276: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 20:33:49.276: INFO: Logging node info for node bootstrap-e2e-minion-group-9w8s Jan 29 20:33:49.320: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-9w8s 7d4d6910-cd1a-4f88-b377-ba2e361d6f58 4181 0 2023-01-29 19:58:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-9w8s kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 19:58:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 20:15:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 20:16:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 20:31:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 20:31:51 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-1-3/us-west1-b/bootstrap-e2e-minion-group-9w8s,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 20:31:35 +0000 UTC,LastTransitionTime:2023-01-29 20:16:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 19:59:08 +0000 UTC,LastTransitionTime:2023-01-29 19:59:08 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:51 +0000 UTC,LastTransitionTime:2023-01-29 20:16:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:51 +0000 UTC,LastTransitionTime:2023-01-29 20:16:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:51 +0000 UTC,LastTransitionTime:2023-01-29 20:16:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 20:31:51 +0000 UTC,LastTransitionTime:2023-01-29 20:16:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.233.143.195,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-9w8s.c.k8s-jkns-gce-1-3.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-9w8s.c.k8s-jkns-gce-1-3.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8a9e577e91a0c299e671ec979c645e19,SystemUUID:8a9e577e-91a0-c299-e671-ec979c645e19,BootID:8214d8ec-eacd-4ca8-88ac-0eea06d64bd8,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 20:33:49.320: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-9w8s Jan 29 20:33:49.368: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-9w8s Jan 29 20:33:49.435: INFO: kube-proxy-bootstrap-e2e-minion-group-9w8s started at 2023-01-29 19:58:51 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.435: INFO: Container kube-proxy ready: true, restart count 8 Jan 29 20:33:49.435: INFO: metadata-proxy-v0.1-5nlck started at 2023-01-29 19:58:52 +0000 UTC (0+2 container statuses recorded) Jan 29 20:33:49.435: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 20:33:49.435: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 20:33:49.435: INFO: konnectivity-agent-cd6h5 started at 2023-01-29 19:59:09 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.435: INFO: Container konnectivity-agent ready: false, restart count 10 Jan 29 20:33:49.435: INFO: metrics-server-v0.5.2-867b8754b9-t82lt started at 2023-01-29 20:00:19 +0000 UTC (0+2 container statuses recorded) Jan 29 20:33:49.435: INFO: Container metrics-server ready: false, restart count 12 Jan 29 20:33:49.435: INFO: Container metrics-server-nanny ready: false, restart count 12 Jan 29 20:33:49.617: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-9w8s Jan 29 20:33:49.617: INFO: Logging node info for node bootstrap-e2e-minion-group-qdgj Jan 29 20:33:49.660: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-qdgj 7e95d761-d26d-4801-b6e9-33b590a2c2d6 4161 0 2023-01-29 19:58:53 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-qdgj kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 19:58:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 20:20:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 20:21:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {node-problem-detector Update v1 2023-01-29 20:31:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 20:31:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-1-3/us-west1-b/bootstrap-e2e-minion-group-qdgj,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2023-01-29 20:20:45 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 20:31:24 +0000 UTC,LastTransitionTime:2023-01-29 20:21:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 19:59:08 +0000 UTC,LastTransitionTime:2023-01-29 19:59:08 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:38 +0000 
UTC,LastTransitionTime:2023-01-29 20:21:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:38 +0000 UTC,LastTransitionTime:2023-01-29 20:21:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 20:31:38 +0000 UTC,LastTransitionTime:2023-01-29 20:21:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 20:31:38 +0000 UTC,LastTransitionTime:2023-01-29 20:21:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:35.197.112.91,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-qdgj.c.k8s-jkns-gce-1-3.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-qdgj.c.k8s-jkns-gce-1-3.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a39255b95751b1e6520d2eb03d96b79e,SystemUUID:a39255b9-5751-b1e6-520d-2eb03d96b79e,BootID:8b2e74b0-5884-4e94-8d97-d70b14c4d7c4,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 
registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 20:33:49.660: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-qdgj Jan 29 20:33:49.708: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-qdgj Jan 29 20:33:49.779: INFO: konnectivity-agent-wh8g5 started at 2023-01-29 19:59:09 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container konnectivity-agent ready: false, restart count 6 Jan 29 20:33:49.779: INFO: kube-proxy-bootstrap-e2e-minion-group-qdgj started at 2023-01-29 19:58:53 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container kube-proxy ready: true, restart count 10 Jan 29 20:33:49.779: INFO: l7-default-backend-8549d69d99-wxpff started at 2023-01-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container default-http-backend ready: true, restart count 3 Jan 29 20:33:49.779: INFO: volume-snapshot-controller-0 started at 2023-01-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container volume-snapshot-controller ready: false, restart count 15 Jan 29 20:33:49.779: INFO: coredns-6846b5b5f-dfbff started at 2023-01-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container coredns ready: true, restart count 11 Jan 29 20:33:49.779: INFO: kube-dns-autoscaler-5f6455f985-msh27 started at 2023-01-29 19:59:08 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:49.779: INFO: Container autoscaler ready: true, restart count 6 Jan 29 20:33:49.779: INFO: metadata-proxy-v0.1-jcl2g started at 2023-01-29 19:58:54 +0000 UTC (0+2 container statuses recorded) Jan 29 20:33:49.779: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 20:33:49.779: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 20:33:49.952: INFO: Latency metrics for node bootstrap-e2e-minion-group-qdgj Jan 29 20:33:49.952: INFO: Logging node info for node bootstrap-e2e-minion-group-tq0k Jan 29 20:33:49.995: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-tq0k a9bb0bc5-ef37-4d6a-942e-813e73ebde7c 4217 0 2023-01-29 19:58:52 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-tq0k kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 19:58:52 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 20:16:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 20:16:57 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 20:32:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 20:32:20 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gce-1-3/us-west1-b/bootstrap-e2e-minion-group-tq0k,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 20:32:00 +0000 UTC,LastTransitionTime:2023-01-29 20:16:56 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 19:59:08 +0000 UTC,LastTransitionTime:2023-01-29 19:59:08 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 20:32:20 +0000 UTC,LastTransitionTime:2023-01-29 20:16:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 20:32:20 +0000 UTC,LastTransitionTime:2023-01-29 20:16:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 20:32:20 +0000 UTC,LastTransitionTime:2023-01-29 20:16:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 20:32:20 +0000 UTC,LastTransitionTime:2023-01-29 20:16:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.105.126.211,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-tq0k.c.k8s-jkns-gce-1-3.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-tq0k.c.k8s-jkns-gce-1-3.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:66bdc05722af80e8318de3c02e7918c3,SystemUUID:66bdc057-22af-80e8-318d-e3c02e7918c3,BootID:96846aaa-dd3b-49c0-98d8-5eeeec3592e2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 20:33:49.996: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-tq0k Jan 29 20:33:50.043: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-tq0k Jan 29 20:33:50.110: INFO: kube-proxy-bootstrap-e2e-minion-group-tq0k started at 2023-01-29 19:58:52 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:50.110: INFO: Container kube-proxy ready: false, restart count 12 Jan 29 20:33:50.110: INFO: metadata-proxy-v0.1-ggkjj started at 2023-01-29 19:58:53 +0000 UTC (0+2 container statuses recorded) Jan 29 20:33:50.110: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 20:33:50.110: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 20:33:50.110: INFO: konnectivity-agent-4nk68 started at 2023-01-29 19:59:09 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:50.110: INFO: Container konnectivity-agent ready: true, restart count 11 Jan 29 20:33:50.110: INFO: coredns-6846b5b5f-7226v started at 2023-01-29 19:59:16 +0000 UTC (0+1 container statuses recorded) Jan 29 20:33:50.110: INFO: Container coredns ready: true, restart count 9 Jan 29 20:33:50.285: INFO: Latency metrics for node bootstrap-e2e-minion-group-tq0k END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:33:50.285 (1.504s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:33:50.285 (1.504s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] 
Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:33:50.285 STEP: Destroying namespace "reboot-4080" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 20:33:50.286 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:33:50.331 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:33:50.331 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:33:50.331 (0s)
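The failure above never reaches the reboot logic: framework namespace setup times out polling for the namespace's "default" ServiceAccount, which the token/serviceaccount controller creates asynchronously. Below is a minimal client-go sketch of an equivalent wait — an illustration, not the framework's own code; the kubeconfig path and namespace name are copied from the log purely as placeholders, and the 2-minute timeout mirrors the ~2m2s the framework waited before failing.

```go
// Minimal sketch: poll until the "default" ServiceAccount exists in a
// namespace, the condition the BeforeEach above timed out on.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder paths/names taken from the log above, not hard requirements.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "reboot-4080"
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts(ns).Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; keep polling
			}
			return err == nil, err
		})
	if err != nil {
		fmt.Println("timed out waiting for the condition:", err)
	}
}
```

When the control plane is healthy this wait normally resolves in seconds, so a two-minute timeout usually points at kube-controller-manager (which creates the ServiceAccount) or the apiserver being unavailable — consistent with the restart counts and BackOff events collected above.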
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 20:07:40.009
from ginkgo_report.xml
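Both failure summaries on this page are lifted from the job's ginkgo_report.xml. When triaging many runs it can be quicker to pull failures straight out of that JUnit file; the sketch below assumes the layout ginkgo v2 typically emits (a testsuites root with testsuite/testcase children and a failure element on failed cases) — adjust the XML tags if a given report differs.

```go
// Rough sketch: print each failed spec and its message from a JUnit report
// such as ginkgo_report.xml. The schema here is an assumption about ginkgo's
// JUnit output, not taken from this job's artifacts.
package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

type report struct {
	Suites []struct {
		Cases []struct {
			Name    string `xml:"name,attr"`
			Failure *struct {
				Message string `xml:"message,attr"`
			} `xml:"failure"`
		} `xml:"testcase"`
	} `xml:"testsuite"`
}

func main() {
	data, err := os.ReadFile("ginkgo_report.xml") // path is a placeholder
	if err != nil {
		panic(err)
	}
	var r report
	if err := xml.Unmarshal(data, &r); err != nil {
		panic(err)
	}
	for _, s := range r.Suites {
		for _, tc := range s.Cases {
			if tc.Failure != nil {
				fmt.Printf("FAIL: %s\n  %s\n", tc.Name, tc.Failure.Message)
			}
		}
	}
}
```

Run against a job's artifacts directory, this would list each failed spec name with its message, e.g. the two "timed out waiting for the condition" failures shown here.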
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:07:09.888 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:07:09.888 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:07:09.888 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 20:07:09.888 Jan 29 20:07:09.888: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 20:07:09.89 Jan 29 20:07:09.929: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:11.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:13.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:15.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:17.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:19.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:21.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:23.970: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:25.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:27.971: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:29.970: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:31.970: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:33.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:35.971: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:37.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:39.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:40.009: INFO: 
Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:40.009: INFO: Unexpected error: <*errors.errorString | 0xc000207ca0>: { s: "timed out waiting for the condition", } [FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 20:07:40.009 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:07:40.009 (30.121s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:07:40.009 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 20:07:40.009 Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-7226v to bootstrap-e2e-minion-group-tq0k Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.041440764s (1.04151063s including waiting) Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-7226v_kube-system(50c9e097-5b0f-4df8-906b-d031ff7e5d85) Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": dial tcp 10.64.2.6:8181: connect: connection refused Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-dfbff to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.662889684s (2.662899979s including waiting) Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.5:8181/ready": dial tcp 10.64.3.5:8181: connect: connection refused Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-dfbff Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": dial tcp 10.64.3.15:8181: connect: connection refused Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-dfbff_kube-system(56a8d266-9fa7-4aaf-b9dd-ddc06dee7b8e) Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-dfbff Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-7226v Jan 29 20:07:47.657: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 20:07:47.657: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 20:07:47.657: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_90a62 became leader Jan 29 20:07:47.657: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_56f11 became leader Jan 29 20:07:47.657: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8ba66 became leader Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-4nk68 to bootstrap-e2e-minion-group-tq0k Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 668.677173ms (668.692909ms including waiting) Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8093/healthz": dial tcp 10.64.2.5:8093: connect: network is unreachable Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-4nk68_kube-system(9618808b-f13f-4c68-85f0-0604438645d3) Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-cd6h5 to bootstrap-e2e-minion-group-9w8s Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 663.587649ms (663.598454ms including waiting) Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-wh8g5 to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.480831038s (1.480840227s including waiting) Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-wh8g5_kube-system(7a8f5ba8-53f9-4149-b38f-7c10aa331632) Jan 29 20:07:47.657: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-wh8g5 Jan 29 20:07:47.657: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-cd6h5 Jan 29 20:07:47.657: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-4nk68 Jan 29 20:07:47.657: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 20:07:47.657: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 20:07:47.657: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image 
"registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 20:07:47.657: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 20:07:47.657: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_5cb0b339-27fa-478a-a12b-f3e084d9ff7a became leader Jan 29 20:07:47.657: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_896d20be-ed11-4ad6-ba6f-aeff112d6cdf became leader Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-msh27 to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.772576572s (2.77258737s including waiting) Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-msh27 Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-msh27 Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9w8s_kube-system(41c8500189f52bcbb0d902b75d8c693f) Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773) Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773) Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f) Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f) Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 20:07:47.657: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_908ace71-8fd9-4871-8936-5aab7c5cfed3 became leader Jan 29 20:07:47.657: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_978a61b3-7079-42ae-9f59-cf7b479348e3 became leader Jan 29 20:07:47.657: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_34f033d2-db64-4ab7-af5f-d35e0c069db5 became leader Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-wxpff to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.193873035s (1.193895245s including waiting) Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-wxpff Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-wxpff Jan 29 20:07:47.657: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5nlck to bootstrap-e2e-minion-group-9w8s Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 793.012358ms (793.412637ms including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.021376731s (2.021411909s including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-g8pvk to bootstrap-e2e-master Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 834.33593ms (834.358292ms including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.93646676s (1.936479152s including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ggkjj to bootstrap-e2e-minion-group-tq0k Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 815.429367ms (815.447239ms including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for 
metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.922350868s (1.922366582s including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-jcl2g to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 771.095227ms (771.126697ms including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} 
Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.922287469s (1.92232627s including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metadata-proxy Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5nlck Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-g8pvk Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ggkjj Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-jcl2g Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-4pd7g to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.362079376s (3.362094624s including waiting) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.97442928s (2.974455307s including waiting) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "https://10.64.3.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Liveness probe failed: Get "https://10.64.3.9:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-4pd7g Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-4pd7g Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-t82lt to bootstrap-e2e-minion-group-9w8s Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.394960598s (1.395000082s including waiting) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.053276162s (1.053291079s including waiting) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": dial tcp 10.64.0.3:10250: connect: connection refused Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get 
"https://10.64.0.3:10250/livez": dial tcp 10.64.0.3:10250: connect: connection refused Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.4:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-t82lt_kube-system(e9e62670-bc8e-4962-b73b-5c0a63921679) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": dial tcp 10.64.0.5:10250: connect: connection refused Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.5:10250/livez": read tcp 10.64.0.1:47518->10.64.0.5:10250: read: connection reset by peer Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-t82lt Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set 
metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.592104829s (3.592134801s including waiting) Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300) Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300) Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:07:47.658 (7.649s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:07:47.658 Jan 29 20:07:47.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:07:47.927 (270ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:07:47.927 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:07:47.928 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:07:47.928 (0s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:07:47.928 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:07:47.928 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:07:47.928 (0s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:07:47.928 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:07:47.928 (0s)
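The kube-system event dump above is emitted by the suite's AfterEach hook (test/e2e/cloud/gcp/reboot.go:73). For manual triage, a similar listing can be reproduced with client-go; the following is a minimal sketch rather than the framework's actual dump code, assuming the kubeconfig path reported in the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above (">>> kubeConfig: /workspace/.kube/config").
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	events, err := client.CoreV1().Events("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Mirrors the log format above: "event for <object>: {component host} <reason>: <message>".
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}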
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
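The --ginkgo.focus argument above is a shell-escaped regular expression matched against the full spec name. A small sketch showing the correspondence (the plain spec title below is reconstructed from the escaped pattern):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern as passed via --ginkgo.focus, with the shell quoting removed.
	focus := `Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$`
	// Full spec name as ginkgo assembles it: suite description, node type, container texts, leaf text.
	name := "Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart"
	fmt.Println(regexp.MustCompile(focus).MatchString(name)) // prints: true
}

Escaping spaces as \s and brackets as \[ \] keeps the pattern intact through shell quoting while still matching the bracketed tags that appear literally in the spec name.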
[FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 20:07:40.009 (from junit_01.xml)
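Decoding the failure below: the apiserver was unreachable after the unclean reboot, so the framework's namespace creation at framework.go:247 kept failing with "connection refused" on a roughly 2-second poll until a roughly 30-second deadline, at which point the wait helper surfaces the generic "timed out waiting for the condition". A minimal sketch of such a poll loop with client-go follows; the interval and timeout are inferred from the timestamps in the log, not taken from the framework source, and the GenerateName scheme is an assumption:

package reboottest

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createTestNamespace retries namespace creation until the apiserver answers,
// mirroring the retry pattern in the log below. When the deadline passes,
// wait.PollImmediate returns an error whose text is exactly
// "timed out waiting for the condition".
func createTestNamespace(client kubernetes.Interface, baseName string) (*v1.Namespace, error) {
	var created *v1.Namespace
	err := wait.PollImmediate(2*time.Second, 30*time.Second, func() (bool, error) {
		ns, err := client.CoreV1().Namespaces().Create(context.TODO(), &v1.Namespace{
			ObjectMeta: metav1.ObjectMeta{GenerateName: baseName + "-"},
		}, metav1.CreateOptions{})
		if err != nil {
			// e.g. Post "https://.../api/v1/namespaces": dial tcp ...: connect: connection refused
			fmt.Printf("Unexpected error while creating namespace: %v\n", err)
			return false, nil // swallow the error so the poll keeps retrying
		}
		created = ns
		return true, nil
	})
	return created, err
}

In the log below, the first Create attempt at 20:07:09.929 fails, retries land every two seconds through 20:07:39.969, and the deadline at 20:07:40.009 is reported as the condition timeout.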
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:07:09.888 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:07:09.888 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:07:09.888 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 20:07:09.888 Jan 29 20:07:09.888: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 20:07:09.89 Jan 29 20:07:09.929: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:11.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:13.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:15.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:17.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:19.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:21.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:23.970: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:25.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:27.971: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:29.970: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:31.970: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:33.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:35.971: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:37.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:39.969: INFO: Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:40.009: INFO: 
Unexpected error while creating namespace: Post "https://35.227.160.196/api/v1/namespaces": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:40.009: INFO: Unexpected error: <*errors.errorString | 0xc000207ca0>: { s: "timed out waiting for the condition", } [FAILED] timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 20:07:40.009 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:07:40.009 (30.121s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:07:40.009 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 20:07:40.009 Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-7226v to bootstrap-e2e-minion-group-tq0k Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.041440764s (1.04151063s including waiting) Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-7226v_kube-system(50c9e097-5b0f-4df8-906b-d031ff7e5d85) Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": dial tcp 10.64.2.6:8181: connect: connection refused Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-dfbff to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.662889684s (2.662899979s including waiting) Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.5:8181/ready": dial tcp 10.64.3.5:8181: connect: connection refused Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-dfbff Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": dial tcp 10.64.3.15:8181: connect: connection refused Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-dfbff_kube-system(56a8d266-9fa7-4aaf-b9dd-ddc06dee7b8e) Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-dfbff Jan 29 20:07:47.657: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-7226v Jan 29 20:07:47.657: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 20:07:47.657: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 20:07:47.657: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 20:07:47.657: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 20:07:47.657: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_90a62 became leader Jan 29 20:07:47.657: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_56f11 became leader Jan 29 20:07:47.657: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8ba66 became leader Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-4nk68 to bootstrap-e2e-minion-group-tq0k Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 668.677173ms (668.692909ms including waiting) Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8093/healthz": dial tcp 10.64.2.5:8093: connect: network is unreachable Jan 29 20:07:47.657: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-4nk68_kube-system(9618808b-f13f-4c68-85f0-0604438645d3) Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-cd6h5 to bootstrap-e2e-minion-group-9w8s Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 663.587649ms (663.598454ms including waiting) Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-wh8g5 to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.480831038s (1.480840227s including waiting) Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container konnectivity-agent Jan 29 20:07:47.657: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-wh8g5_kube-system(7a8f5ba8-53f9-4149-b38f-7c10aa331632) Jan 29 20:07:47.657: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-wh8g5 Jan 29 20:07:47.657: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-cd6h5 Jan 29 20:07:47.657: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-4nk68 Jan 29 20:07:47.657: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 20:07:47.657: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 20:07:47.657: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image 
"registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 20:07:47.657: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 20:07:47.657: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_5cb0b339-27fa-478a-a12b-f3e084d9ff7a became leader Jan 29 20:07:47.657: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_896d20be-ed11-4ad6-ba6f-aeff112d6cdf became leader Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-msh27 to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.772576572s (2.77258737s including waiting) Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-msh27 Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-msh27 Jan 29 20:07:47.657: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9w8s_kube-system(41c8500189f52bcbb0d902b75d8c693f) Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
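The replicaset-controller's FailedCreate above is the standard ordering dependency: pods cannot be created until their service account exists, which is also why this report's headline failure is a timeout waiting for a service account. A minimal polling sketch of that wait (kubeconfig path taken from the log; the 2s/2m intervals are illustrative, not the framework's exact values):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the service account named in the FailedCreate event exists.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, getErr := cs.CoreV1().ServiceAccounts("kube-system").Get(
			context.TODO(), "kube-dns-autoscaler", metav1.GetOptions{})
		if getErr != nil {
			return false, nil // not there yet; keep waiting
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for the condition") // the error text seen in this report
		return
	}
	fmt.Println("service account exists; pod creation can proceed")
}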
Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773) Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
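The DNSConfigForming warnings above are the kubelet trimming the node's nameserver list to the classic resolver limit before applying it; the three applied entries are quoted in the event. A toy sketch of that truncation (the limit of 3 is stated here as an assumption about kubelet behavior, not imported from its source):

package main

import "fmt"

const maxNameservers = 3 // assumed resolver limit the kubelet enforces

func formNameserverLine(servers []string) []string {
	if len(servers) <= maxNameservers {
		return servers
	}
	// Extra entries are dropped; the first three are applied,
	// matching the "some nameservers have been omitted" warning.
	return servers[:maxNameservers]
}

func main() {
	// The fourth entry is a hypothetical extra nameserver; the log only
	// shows the three that survived.
	fmt.Println(formNameserverLine([]string{"1.1.1.1", "8.8.8.8", "1.0.0.1", "169.254.169.254"}))
	// Output: [1.1.1.1 8.8.8.8 1.0.0.1]
}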
Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773) Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f) Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f) Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 20:07:47.657: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_908ace71-8fd9-4871-8936-5aab7c5cfed3 became leader Jan 29 20:07:47.657: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_978a61b3-7079-42ae-9f59-cf7b479348e3 became leader Jan 29 20:07:47.657: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_34f033d2-db64-4ab7-af5f-d35e0c069db5 became leader Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
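The kube-scheduler and kube-controller-manager "became leader" events above are lease hand-offs recorded each time the control-plane component restarts and re-acquires its lock. Those components build on client-go's leader election; a compact, hedged sketch of that machinery (lock name, identity, and timings here are illustrative placeholders):

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{Name: "demo-lock", Namespace: "kube-system"},
		Client:    cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			// Identities of the form <host>_<uuid> are what show up in
			// the LeaderElection events above.
			Identity: "bootstrap-e2e-master_example-uuid",
		},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* leader-only work */ },
			OnStoppedLeading: func() { /* relinquish cleanly */ },
		},
	})
}

Each disruption in this test interrupts lease renewal, so a fresh identity winning the lock, as logged repeatedly above, is the expected recovery path.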
Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-wxpff to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.193873035s (1.193895245s including waiting) Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-wxpff Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend Jan 29 20:07:47.657: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-wxpff Jan 29 20:07:47.657: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 20:07:47.657: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5nlck to bootstrap-e2e-minion-group-9w8s Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 793.012358ms (793.412637ms including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.021376731s (2.021411909s including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-g8pvk to bootstrap-e2e-master Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 834.33593ms (834.358292ms including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.93646676s (1.936479152s including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ggkjj to bootstrap-e2e-minion-group-tq0k Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 815.429367ms (815.447239ms including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for 
metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.922350868s (1.922366582s including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-jcl2g to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 771.095227ms (771.126697ms including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metadata-proxy Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} 
Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.922287469s (1.92232627s including waiting) Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.657: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metadata-proxy Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5nlck Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-g8pvk Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ggkjj Jan 29 20:07:47.658: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-jcl2g Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-4pd7g to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.362079376s (3.362094624s including waiting) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.97442928s (2.974455307s including waiting) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "https://10.64.3.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Liveness probe failed: Get "https://10.64.3.9:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-4pd7g Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-4pd7g Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-t82lt to bootstrap-e2e-minion-group-9w8s Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.394960598s (1.395000082s including waiting) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.053276162s (1.053291079s including waiting) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": dial tcp 10.64.0.3:10250: connect: connection refused Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get 
"https://10.64.0.3:10250/livez": dial tcp 10.64.0.3:10250: connect: connection refused Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.4:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.5:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-t82lt_kube-system(e9e62670-bc8e-4962-b73b-5c0a63921679) Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.5:10250/readyz": dial tcp 10.64.0.5:10250: connect: connection refused Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.5:10250/livez": read tcp 10.64.0.1:47518->10.64.0.5:10250: read: connection reset by peer Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server-nanny Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-t82lt Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 20:07:47.658: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set 
metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-qdgj Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.592104829s (3.592134801s including waiting) Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300) Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container volume-snapshot-controller Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300) Jan 29 20:07:47.658: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:07:47.658 (7.649s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:07:47.658 Jan 29 20:07:47.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:07:47.927 (270ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:07:47.927 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:07:47.928 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:07:47.928 (0s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:07:47.928 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:07:47.928 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:07:47.928 (0s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:07:47.928 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:07:47.928 (0s)
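Everything in the dump above comes from the AfterEach step's "Collecting events from namespace" pass; it is an ordinary event list. A hedged client-go sketch that reproduces the same "event for X: {component host} Reason: message" formatting (kubeconfig path assumed from the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	events, err := cs.CoreV1().Events("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Mirrors the log format: event for <object>: {<component> <host>} <Reason>: <message>
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}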
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 20:07:09.712 There were additional failures detected after the initial failure. These are visible in the timeline.
from ginkgo_report.xml
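This failure is about timing: each node's Ready condition must flip to false after the SSH'd eth0-down snippet in the timeline below, then back to true within the allotted window. A minimal sketch of one half of that check, the wait for Ready to become false (node name and kubeconfig path taken from the log; intervals are illustrative, not the framework's exact values):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is currently true.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirrors "Waiting up to 2m0s for node ... condition Ready to be false"
	// from the timeline below.
	err = wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		node, getErr := cs.CoreV1().Nodes().Get(
			context.TODO(), "bootstrap-e2e-minion-group-9w8s", metav1.GetOptions{})
		if getErr != nil {
			return false, nil // tolerate transient API errors while polling
		}
		return !nodeReady(node), nil
	})
	fmt.Println("Ready became false within 2m0s:", err == nil)
}

The repeated "Condition Ready of node ... is true instead of false" lines that follow are this kind of poll observing the nodes still reporting Ready while the test waits for them to drop out.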
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:04:35.069 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:04:35.069 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:04:35.069 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 20:04:35.07 Jan 29 20:04:35.070: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 20:04:35.071 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 20:04:35.207 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 20:04:35.291 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:04:35.378 (309ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 20:04:35.378 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 20:04:35.378 (0s) > Enter [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/29/23 20:04:35.378 Jan 29 20:04:35.528: INFO: Getting bootstrap-e2e-minion-group-9w8s Jan 29 20:04:35.528: INFO: Getting bootstrap-e2e-minion-group-qdgj Jan 29 20:04:35.528: INFO: Getting bootstrap-e2e-minion-group-tq0k Jan 29 20:04:35.575: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-9w8s condition Ready to be true Jan 29 20:04:35.575: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-qdgj condition Ready to be true Jan 29 20:04:35.575: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-tq0k condition Ready to be true Jan 29 20:04:35.621: INFO: Node bootstrap-e2e-minion-group-9w8s has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:04:35.621: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:04:35.621: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5nlck" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.621: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-9w8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Node bootstrap-e2e-minion-group-tq0k has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:04:35.630: INFO: Node bootstrap-e2e-minion-group-qdgj has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0] Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0] Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ggkjj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-msh27" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qdgj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jcl2g" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.665: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=true. Elapsed: 44.217209ms Jan 29 20:04:35.665: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.665: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=true. Elapsed: 44.435227ms Jan 29 20:04:35.666: INFO: Pod "metadata-proxy-v0.1-5nlck" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.666: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:04:35.666: INFO: Getting external IP address for bootstrap-e2e-minion-group-9w8s Jan 29 20:04:35.666: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-9w8s(35.233.143.195:22) Jan 29 20:04:35.673: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 43.853803ms Jan 29 20:04:35.674: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.677: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=true. Elapsed: 47.637705ms Jan 29 20:04:35.677: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.678: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=true. Elapsed: 47.59175ms Jan 29 20:04:35.678: INFO: Pod "metadata-proxy-v0.1-jcl2g" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.679: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=true. Elapsed: 49.051658ms Jan 29 20:04:35.679: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.679: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=true. 
Elapsed: 49.255787ms
Jan 29 20:04:35.679: INFO: Pod "metadata-proxy-v0.1-ggkjj" satisfied condition "running and ready, or succeeded"
Jan 29 20:04:35.679: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj]
Jan 29 20:04:35.679: INFO: Getting external IP address for bootstrap-e2e-minion-group-tq0k
Jan 29 20:04:35.679: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-tq0k(34.105.126.211:22)
Jan 29 20:04:35.679: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qdgj": Phase="Running", Reason="", readiness=true. Elapsed: 49.423803ms
Jan 29 20:04:35.679: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qdgj" satisfied condition "running and ready, or succeeded"
Jan 29 20:04:35.679: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0]
Jan 29 20:04:35.679: INFO: Getting external IP address for bootstrap-e2e-minion-group-qdgj
Jan 29 20:04:35.679: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-qdgj(35.197.112.91:22)
Jan 29 20:04:36.211: INFO: ssh prow@35.197.112.91:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &
Jan 29 20:04:36.211: INFO: ssh prow@35.197.112.91:22: stdout: ""
Jan 29 20:04:36.211: INFO: ssh prow@35.197.112.91:22: stderr: ""
Jan 29 20:04:36.211: INFO: ssh prow@35.197.112.91:22: exit code: 0
Jan 29 20:04:36.211: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-qdgj condition Ready to be false
Jan 29 20:04:36.216: INFO: ssh prow@35.233.143.195:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &
Jan 29 20:04:36.216: INFO: ssh prow@35.233.143.195:22: stdout: ""
Jan 29 20:04:36.216: INFO: ssh prow@35.233.143.195:22: stderr: ""
Jan 29 20:04:36.216: INFO: ssh prow@35.233.143.195:22: exit code: 0
Jan 29 20:04:36.216: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-9w8s condition Ready to be false
Jan 29 20:04:36.226: INFO: ssh prow@34.105.126.211:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &
Jan 29 20:04:36.226: INFO: ssh prow@34.105.126.211:22: stdout: ""
Jan 29 20:04:36.226: INFO: ssh prow@34.105.126.211:22: stderr: ""
Jan 29 20:04:36.226: INFO: ssh prow@34.105.126.211:22: exit code: 0
Jan 29 20:04:36.226: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-tq0k condition Ready to be false
Jan 29 20:04:36.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:04:36.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:04:36.269: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:04:38.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:04:38.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:04:38.312: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:04:40.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:04:40.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:04:40.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:04:42.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:04:42.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled Jan 29 20:04:42.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:44.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:44.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:44.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:46.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:46.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:46.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:48.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:48.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:48.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:50.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:50.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:50.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:52.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:52.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:52.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:54.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:54.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:54.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:56.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:56.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:56.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:58.791: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:58.791: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:58.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:00.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:00.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:00.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:02.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:02.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:02.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:04.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:04.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:04.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:06.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:06.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:06.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 20:05:09.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:09.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:09.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:11.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:11.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:11.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:13.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:13.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:13.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:15.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:15.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:15.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:17.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:17.220: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:17.220: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:19.260: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:19.267: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:19.267: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 20:05:21.304: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-tq0k condition Ready to be true Jan 29 20:05:21.312: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-9w8s condition Ready to be true Jan 29 20:05:21.312: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-qdgj condition Ready to be true Jan 29 20:05:21.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:21.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:21.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:23.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:23.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:23.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:25.438: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:25.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:25.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:27.486: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:27.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:27.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:29.529: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:29.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:29.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:31.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:31.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
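The block above (from 20:04:35.679 on) is the per-node disruption sequence this test runs concurrently against all three minions: confirm the node's tracked kube-system pods are healthy, look up the node's external IP, fire the network-disruption script over SSH, wait up to 2m0s for the node's Ready condition to go false, wait up to 5m0s for it to come back true, then re-check the pods. A minimal sketch of that control flow, with hypothetical step functions standing in for the framework helpers whose output appears in this log (my paraphrase, not the actual test/e2e/cloud/gcp/reboot.go code):

package rebootsketch

import "time"

// nodeSteps bundles hypothetical stand-ins for the framework helpers
// whose log lines appear above; names and signatures are assumptions.
type nodeSteps struct {
	podsReady    func(timeout time.Duration) error // "Wanted all N pods to be running and ready, or succeeded"
	issueSSH     func(cmd string) error            // SSH "nohup sh -c '...'" on <node>
	waitNotReady func(timeout time.Duration) error // "Waiting up to 2m0s ... Ready to be false"
	waitReady    func(timeout time.Duration) error // "Waiting up to 5m0s ... Ready to be true"
}

// rebootNode runs the sequence for one node; the test runs this
// concurrently per node and the case fails if any step errors out.
func rebootNode(s nodeSteps, disruptCmd string) error {
	if err := s.podsReady(5 * time.Minute); err != nil {
		return err
	}
	if err := s.issueSSH(disruptCmd); err != nil {
		return err
	}
	if err := s.waitNotReady(2 * time.Minute); err != nil {
		return err
	}
	if err := s.waitReady(5 * time.Minute); err != nil {
		return err
	}
	// Once the node reports Ready again, the pod check is repeated
	// with a 5m0s timeout, as in the 20:06:51 entries below.
	return s.podsReady(5 * time.Minute)
}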
Jan 29 20:05:31.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:33.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:33.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:33.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:35.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:35.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:35.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:37.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:37.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:37.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:39.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:39.775: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:39.775: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:41.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:41.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:41.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:43.840: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:43.866: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:43.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
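The SSH command itself (quoted in full at 20:04:35-20:04:36 above) is one backgrounded shell script per node: sleep 10, take eth0 down, hold it down for 120 seconds, bring it back up (with one retry), then run dhclient and restart systemd-networkd to recover the address, echoing each step into /dev/kmsg so it lands in the node's kernel log. Working the window out from those sleeps and the dispatch time is plain arithmetic (actual on-VM timing can drift by a second or two):

package rebootsketch

import (
	"fmt"
	"time"
)

// DisruptionWindow derives when eth0 went down and came back, given the
// script was dispatched at 20:04:35.679 UTC (milliseconds dropped here).
func DisruptionWindow() {
	dispatch := time.Date(2023, time.January, 29, 20, 4, 35, 0, time.UTC)
	linkDown := dispatch.Add(10 * time.Second)  // "sleep 10"  -> ~20:04:45
	linkUp := linkDown.Add(120 * time.Second)   // "sleep 120" -> ~20:06:45
	fmt.Println("eth0 down:", linkDown.Format("15:04:05"), "eth0 up:", linkUp.Format("15:04:05"))
}

That window matches what follows in the log: the nodes are marked NotReady at 20:05:21 (about 36s after the link drops; see the grace-period note below) and the recovery checks start shortly after 20:06:45.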
Jan 29 20:05:45.883: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:45.910: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:45.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:47.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:47.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:47.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:49.970: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:50.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:50.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:52.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:52.079: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:52.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:54.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:54.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:54.128: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:56.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:56.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:56.171: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:58.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
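Each "Condition Ready of node ... instead of ..." line above is one iteration of a roughly 2-second poll against the API server, first waiting for Ready to leave the true state and then (from 20:05:21) waiting for it to return. A minimal version of such a loop using client-go; this is my sketch under those assumptions, and waitForNodeReady is my name, not the framework's:

package rebootsketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls the node's Ready condition every 2s until it
// equals wanted or the timeout expires (2m0s for false, 5m0s for true
// in the log above).
func waitForNodeReady(c kubernetes.Interface, name string, wanted corev1.ConditionStatus, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type != corev1.NodeReady {
				continue
			}
			if cond.Status == wanted {
				return true, nil
			}
			// Approximates the per-iteration log line above.
			fmt.Printf("Condition Ready of node %s is %s instead of %s. Reason: %s, message: %s\n",
				name, cond.Status, wanted, cond.Reason, cond.Message)
			return false, nil
		}
		return false, nil
	})
}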
Jan 29 20:05:58.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:58.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:00.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:00.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:00.259: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:02.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:02.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:02.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:04.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:04.345: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:04.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:06.323: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:06.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:06.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:08.366: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:08.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:08.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:10.409: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:10.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
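Note that the Reason: NodeStatusUnknown entries are not produced by the kubelet: with eth0 down the kubelet cannot post status at all, so after a grace period the node lifecycle controller in kube-controller-manager overwrites the Ready condition with reason NodeStatusUnknown and message "Kubelet stopped posting node status.". That is why NotReady first appears at 20:05:21, roughly 36-40s after the link dropped at ~20:04:45. A simplified stand-in for that check, assuming the default --node-monitor-grace-period of 40s:

package rebootsketch

import "time"

// Assumed default for kube-controller-manager's --node-monitor-grace-period.
const nodeMonitorGracePeriod = 40 * time.Second

// heartbeatExpired sketches the controller's decision: once the last
// kubelet heartbeat is older than the grace period, the Ready condition
// is forced to Unknown with reason NodeStatusUnknown.
func heartbeatExpired(lastHeartbeat, now time.Time) bool {
	return now.Sub(lastHeartbeat) > nodeMonitorGracePeriod
}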
Jan 29 20:06:10.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:12.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:12.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:12.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:14.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:14.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:14.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:16.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:16.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:16.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:18.589: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:18.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:18.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:20.632: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:20.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:20.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:22.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:22.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:22.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 20:06:24.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:24.814: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:24.814: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:26.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:26.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:26.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:28.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:28.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:28.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:30.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:30.951: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:30.951: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:32.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:32.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:32.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:34.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:35.047: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:35.047: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:37.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 20:06:37.093: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:37.093: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:39.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:39.154: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:39.154: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:41.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:41.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:41.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:43.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:43.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:43.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:45.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:45.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:45.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:47.250: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:47.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:47.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:49.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:49.394: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
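From 20:06:51 below, the nodes have recovered enough for the test to move to its last phase: for each node, every tracked kube-system pod must become "running and ready, or succeeded" within 5m0s. The condition dumps that follow ([{Initialized True ...} {Ready False ...} ...]) are the pods' status.conditions arrays printed verbatim. The predicate amounts to the following (my sketch, not the framework's exact helper):

package rebootsketch

import corev1 "k8s.io/api/core/v1"

// runningReadyOrSucceeded mirrors the "running and ready, or succeeded"
// wording in the log: a Succeeded pod passes outright; a Running pod
// passes only if its PodReady condition is True.
func runningReadyOrSucceeded(pod *corev1.Pod) bool {
	if pod.Status.Phase == corev1.PodSucceeded {
		return true
	}
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

In the entries below, pods such as metadata-proxy-v0.1-jcl2g are Running but Ready=False (their readiness probes have not passed again yet), so each poll logs an "Error evaluating pod condition" line and retries until the pod turns Ready or the 5m0s budget runs out.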
Jan 29 20:06:49.394: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:51.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:51.440: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0] Jan 29 20:06:51.440: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:51.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:51.440: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-msh27" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:51.440: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qdgj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:51.440: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jcl2g" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:51.485: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.904048ms Jan 29 20:06:51.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:51.486: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 45.737793ms Jan 29 20:06:51.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:51.487: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qdgj": Phase="Running", Reason="", readiness=true. Elapsed: 47.112445ms Jan 29 20:06:51.487: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qdgj" satisfied condition "running and ready, or succeeded" Jan 29 20:06:51.487: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. 
Elapsed: 47.151145ms Jan 29 20:06:51.487: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:06:53.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:53.508: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:06:53.508: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5nlck" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:53.508: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-9w8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:53.536: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 2.09622409s Jan 29 20:06:53.536: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:53.537: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.097076043s Jan 29 20:06:53.537: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:53.540: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 2.099504636s Jan 29 20:06:53.540: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:06:53.554: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. 
Elapsed: 45.287758ms Jan 29 20:06:53.554: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 45.358449ms Jan 29 20:06:53.554: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:53.554: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:55.434: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:06:55.434: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ggkjj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:55.434: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:55.486: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. Elapsed: 51.585279ms Jan 29 20:06:55.486: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 51.398713ms Jan 29 20:06:55.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:55.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:55.529: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.088815212s Jan 29 20:06:55.529: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:55.531: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 4.090691145s Jan 29 20:06:55.531: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:55.532: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 4.091960247s Jan 29 20:06:55.532: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:06:55.618: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 2.109167668s Jan 29 20:06:55.618: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:55.618: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 2.109152941s Jan 29 20:06:55.618: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:57.530: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.090009397s Jan 29 20:06:57.530: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:57.533: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 6.092611427s Jan 29 20:06:57.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:57.534: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 6.093662326s Jan 29 20:06:57.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:06:57.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 2.099573378s Jan 29 20:06:57.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:57.534: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.099879312s Jan 29 20:06:57.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:57.599: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 4.090237088s Jan 29 20:06:57.599: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 4.090308748s Jan 29 20:06:57.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:57.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:59.529: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.089445045s Jan 29 20:06:59.529: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:59.533: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 8.092829729s Jan 29 20:06:59.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:59.534: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.100249535s Jan 29 20:06:59.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 4.100053867s Jan 29 20:06:59.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:59.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:59.535: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 8.094260174s Jan 29 20:06:59.535: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:06:59.599: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 6.090980191s Jan 29 20:06:59.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:59.600: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 6.091094232s Jan 29 20:06:59.600: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:01.531: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.090870111s Jan 29 20:07:01.531: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:01.532: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 10.092232234s Jan 29 20:07:01.532: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:01.533: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. Elapsed: 6.098711082s Jan 29 20:07:01.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:07:01.533: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 6.098624244s Jan 29 20:07:01.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:07:01.534: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.093544563s Jan 29 20:07:01.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:07:01.597: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 8.088862295s Jan 29 20:07:01.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:01.600: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 8.091135896s Jan 29 20:07:01.600: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:03.536: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.095926049s Jan 29 20:07:03.536: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:03.537: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 12.096991128s Jan 29 20:07:03.537: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:03.537: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.102883345s Jan 29 20:07:03.537: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:07:03.537: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 12.097052515s Jan 29 20:07:03.537: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:07:03.537: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. Elapsed: 8.103242509s Jan 29 20:07:03.537: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:07:03.601: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 10.092550808s Jan 29 20:07:03.601: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:03.601: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 10.092641982s Jan 29 20:07:03.601: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:05.530: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.090603383s Jan 29 20:07:05.530: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:05.533: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 14.092637804s Jan 29 20:07:05.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:07:05.534: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 14.093979674s Jan 29 20:07:05.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:05.534: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. Elapsed: 10.100031958s Jan 29 20:07:05.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:07:05.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=true. Elapsed: 10.099903878s Jan 29 20:07:05.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" satisfied condition "running and ready, or succeeded" Jan 29 20:07:05.599: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.090267816s Jan 29 20:07:05.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:05.599: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 12.090483785s Jan 29 20:07:05.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:07.531: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.091143988s Jan 29 20:07:07.531: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:07.533: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 16.092629114s Jan 29 20:07:07.533: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=true. Elapsed: 12.098583756s Jan 29 20:07:07.533: INFO: Pod "metadata-proxy-v0.1-ggkjj" satisfied condition "running and ready, or succeeded" Jan 29 20:07:07.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:07.533: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 16.092501391s Jan 29 20:07:07.533: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:07:07.533: INFO: Reboot successful on node bootstrap-e2e-minion-group-tq0k Jan 29 20:07:07.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:07:07.599: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 14.090185502s Jan 29 20:07:07.599: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 14.090252602s Jan 29 20:07:07.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:07.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:09.526: INFO: Encountered non-retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-msh27: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-msh27": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:09.526: INFO: Pod kube-dns-autoscaler-5f6455f985-msh27 failed to be running and ready, or succeeded. Jan 29 20:07:09.526: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:09.526: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 29 20:07:09.528: INFO: Encountered non-retryable error while getting pod kube-system/metadata-proxy-v0.1-jcl2g: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-jcl2g": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:09.528: INFO: Pod metadata-proxy-v0.1-jcl2g failed to be running and ready, or succeeded. Jan 29 20:07:09.528: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0] Jan 29 20:07:09.528: INFO: Status for not ready pod kube-system/metadata-proxy-v0.1-jcl2g: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:54 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:05:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:04:30 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:53 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.138.0.4 PodIPs:[{IP:10.138.0.4}] StartTime:2023-01-29 19:58:54 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:metadata-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:29 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 19:58:55 +0000 UTC,FinishedAt:2023-01-29 20:03:35 +0000 UTC,ContainerID:containerd://cacb346a0f03b550ce0669fe18c201e67f7f95bf7beaec50bc36c6e716fd10d8,}} Ready:true RestartCount:1 Image:registry.k8s.io/metadata-proxy:v0.1.12 ImageID:registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a ContainerID:containerd://b6b6c9ec51fb0d6b19b59ef35688dfa56c14b7560b247cd8c7db62ece5e8bf3c Started:0xc004ca0687} {Name:prometheus-to-sd-exporter State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:29 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 19:58:58 +0000 UTC,FinishedAt:2023-01-29 20:03:36 +0000 UTC,ContainerID:containerd://d094a4090d516ae69fd4d36104f37b8d35d92c5ead4cda9908a11c6232a1dd7c,}} Ready:true RestartCount:1 Image:gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1 ImageID:gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 ContainerID:containerd://767b4763626c572e0e0ffef9e47da4348b5f8d80b87e7c610d477757b6aa0114 Started:0xc004ca068f}] QOSClass:Guaranteed EphemeralContainerStatuses:[]} Jan 29 20:07:09.568: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-jcl2g/metadata-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-jcl2g/log?container=metadata-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.568: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-jcl2g/metadata-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-jcl2g/log?container=metadata-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.593: INFO: Encountered non-retryable error while getting pod kube-system/kube-proxy-bootstrap-e2e-minion-group-9w8s: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-9w8s": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:09.593: INFO: Encountered non-retryable error while getting pod 
kube-system/metadata-proxy-v0.1-5nlck: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-5nlck": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:09.593: INFO: Pod kube-proxy-bootstrap-e2e-minion-group-9w8s failed to be running and ready, or succeeded. Jan 29 20:07:09.593: INFO: Pod metadata-proxy-v0.1-5nlck failed to be running and ready, or succeeded. Jan 29 20:07:09.593: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: false. Pods: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:07:09.593: INFO: Status for not ready pod kube-system/metadata-proxy-v0.1-5nlck: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:52 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:05:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:04:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-29 19:58:52 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:metadata-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:29 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 19:58:53 +0000 UTC,FinishedAt:2023-01-29 20:03:34 +0000 UTC,ContainerID:containerd://60e70f66edffcc197ddedba0ac99d925d2caffd3043619aa2d32a4863e525aa0,}} Ready:true RestartCount:1 Image:registry.k8s.io/metadata-proxy:v0.1.12 ImageID:registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a ContainerID:containerd://ca81caa06e4a3e28d5a981bada2110429a692e3f9f36d77cdab4fbf3441777c9 Started:0xc004b2dbf7} {Name:prometheus-to-sd-exporter State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:29 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 19:58:56 +0000 UTC,FinishedAt:2023-01-29 20:03:34 +0000 UTC,ContainerID:containerd://f327ad10c432957f8349abb91ed50e8a92393b4eb55c5815cc10879bdd6434bb,}} Ready:true RestartCount:1 Image:gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1 ImageID:gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 ContainerID:containerd://ceb4c34b545efdba0e4f7830863af373bdaa14e452cd0396c9176d830ef3dcdb Started:0xc004b2dbff}] QOSClass:Guaranteed EphemeralContainerStatuses:[]} Jan 29 20:07:09.607: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-jcl2g/prometheus-to-sd-exporter, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-jcl2g/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.607: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-jcl2g/prometheus-to-sd-exporter, err: Get 
"https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-jcl2g/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.607: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:59:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:06:37 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:06:37 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:59:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.18 PodIPs:[{IP:10.64.3.18}] StartTime:2023-01-29 19:59:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 40s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 20:06:11 +0000 UTC,FinishedAt:2023-01-29 20:06:37 +0000 UTC,ContainerID:containerd://87941bfe6ef8ac3f156cdfe75f51bcaa5141fee31fcfc2799c84ce4f46b62258,}} Ready:false RestartCount:5 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://87941bfe6ef8ac3f156cdfe75f51bcaa5141fee31fcfc2799c84ce4f46b62258 Started:0xc004ca137f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 29 20:07:09.633: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-5nlck/metadata-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-5nlck/log?container=metadata-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.633: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-5nlck/metadata-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-5nlck/log?container=metadata-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.647: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.647: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.647: INFO: Status 
for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-msh27: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:59:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:05:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:04:32 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:59:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.16 PodIPs:[{IP:10.64.3.16}] StartTime:2023-01-29 19:59:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:31 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 19:59:16 +0000 UTC,FinishedAt:2023-01-29 20:03:35 +0000 UTC,ContainerID:containerd://0d7e9cc39c3d3a9e0d7632e78b186e11e57b21654bbc43adefff4759d0ee11fa,}} Ready:true RestartCount:1 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://048aa3caf14fa82452e793d2a7284725d292604dcbad5fbee6284dde2abf9601 Started:0xc004a41d17}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 29 20:07:09.672: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-5nlck/prometheus-to-sd-exporter, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-5nlck/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.672: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-5nlck/prometheus-to-sd-exporter, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-5nlck/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.672: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-9w8s: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:51 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:05:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:00:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-29 19:58:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:27 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 20:00:17 +0000 UTC,FinishedAt:2023-01-29 20:03:34 +0000 
UTC,ContainerID:containerd://7e9bb9de010fb9691a57f47855a7f7d34176026c20ce64976c6010841098e2c5,}} Ready:true RestartCount:3 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2 ImageID:sha256:ef97fd17575d534d8bc2960bbf1e744379f3ac6e86b9b97974e086f1516b75e5 ContainerID:containerd://20983402883a5fd9f499e22377f7183345f6103ec834ce1d692b60fa7df8b2ae Started:0xc004c36757}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 29 20:07:09.686: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-msh27/autoscaler, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-msh27/log?container=autoscaler&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.686: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-msh27/autoscaler, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-msh27/log?container=autoscaler&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.712: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-9w8s/kube-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-9w8s/log?container=kube-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.712: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-9w8s/kube-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-9w8s/log?container=kube-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.712: INFO: Node bootstrap-e2e-minion-group-9w8s failed reboot test. Jan 29 20:07:09.712: INFO: Node bootstrap-e2e-minion-group-qdgj failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 20:07:09.712 < Exit [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/29/23 20:07:09.712 (2m34.334s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:07:09.712 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 20:07:09.712 Jan 29 20:07:09.752: INFO: Unexpected error: <*url.Error | 0xc003409f50>: { Op: "Get", URL: "https://35.227.160.196/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc002e777c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0038fbb60>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 227, 160, 196], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0013fc7e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://35.227.160.196/api/v1/namespaces/kube-system/events": dial tcp 35.227.160.196:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/29/23 20:07:09.752 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:07:09.752 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:07:09.752 Jan 29 20:07:09.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:07:09.791 (39ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 20:07:09.791 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 20:07:09.791 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:07:09.791 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:07:09.791 STEP: Collecting events from namespace "reboot-2364". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 20:07:09.791 Jan 29 20:07:09.831: INFO: Unexpected error: failed to list events in namespace "reboot-2364": <*url.Error | 0xc0038fbb90>: { Op: "Get", URL: "https://35.227.160.196/api/v1/namespaces/reboot-2364/events", Err: <*net.OpError | 0xc000a78410>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0039692c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 227, 160, 196], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0005eee60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:07:09.831 (40ms) [FAILED] failed to list events in namespace "reboot-2364": Get "https://35.227.160.196/api/v1/namespaces/reboot-2364/events": dial tcp 35.227.160.196:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/29/23 20:07:09.831 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:07:09.831 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:07:09.831 STEP: Destroying namespace "reboot-2364" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 20:07:09.831 [FAILED] Couldn't delete ns: "reboot-2364": Delete "https://35.227.160.196/api/v1/namespaces/reboot-2364": dial tcp 35.227.160.196:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.227.160.196/api/v1/namespaces/reboot-2364", Err:(*net.OpError)(0xc002e77ea0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/29/23 20:07:09.871 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:07:09.871 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:07:09.871 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:07:09.871 (0s)
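Triage note: every failure from 20:07:09 onward in the block above — the non-retryable pod GETs, the AfterEach event collection, the namespace dump, and the namespace deletion — is the same underlying error: dial tcp 35.227.160.196:443: connect: connection refused. The API server itself became unreachable during teardown. A quick manual reachability check against the apiserver endpoint (a hypothetical triage step, not part of the test; the IP is taken from the log above) would be:

    # connection refused here confirms the control plane is down;
    # even an unauthenticated 401/403 would prove the endpoint is reachable
    curl -k https://35.227.160.196/healthz
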
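The repeated "Error evaluating pod condition running and ready, or succeeded" lines in these logs come from the framework polling each pod until it is either Succeeded, or Running with condition Ready=True; a pod that is Running but reports Ready=False keeps failing the check. A rough way to spot-check the same condition by hand (assuming kubectl access to the same cluster; this is not the framework's own code) is:

    # prints the pod phase followed by its Ready condition status
    kubectl get pod volume-snapshot-controller-0 -n kube-system \
      -o jsonpath='{.status.phase} {.status.conditions[?(@.type=="Ready")].status}'

The framework's check passes when the phase is Succeeded, or when this prints "Running True".
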
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 20:07:09.712 There were additional failures detected after the initial failure. These are visible in the timeline. (from junit_01.xml)
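For readability, the disruption this test applies to each node (the SSH command quoted verbatim, on one line, in the log below) expands to the following shell sequence: drop eth0 for 120 seconds, bring it back up (with one retry), then re-acquire an address via dhclient and restart systemd-networkd, logging each step to the kernel ring buffer:

    nohup sh -c '
      sleep 10
      echo Shutting down eth0 | sudo tee /dev/kmsg
      sudo ip link set eth0 down | sudo tee /dev/kmsg
      sleep 120
      echo Starting up eth0 | sudo tee /dev/kmsg
      sudo ip link set eth0 up | sudo tee /dev/kmsg
      sleep 10
      echo Retrying starting up eth0 | sudo tee /dev/kmsg
      sudo ip link set eth0 up | sudo tee /dev/kmsg
      echo Running dhclient | sudo tee /dev/kmsg
      sudo dhclient | sudo tee /dev/kmsg
      echo Starting systemd-networkd | sudo tee /dev/kmsg
      sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg
    ' >/dev/null 2>&1 &

Each node is expected to go NotReady while the link is down and return to Ready once networking is restored; the failure above means at least one node did not recover within the allotted time.
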
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:04:35.069 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:04:35.069 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:04:35.069 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 20:04:35.07 Jan 29 20:04:35.070: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 20:04:35.071 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 20:04:35.207 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 20:04:35.291 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:04:35.378 (309ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 20:04:35.378 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 20:04:35.378 (0s) > Enter [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/29/23 20:04:35.378 Jan 29 20:04:35.528: INFO: Getting bootstrap-e2e-minion-group-9w8s Jan 29 20:04:35.528: INFO: Getting bootstrap-e2e-minion-group-qdgj Jan 29 20:04:35.528: INFO: Getting bootstrap-e2e-minion-group-tq0k Jan 29 20:04:35.575: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-9w8s condition Ready to be true Jan 29 20:04:35.575: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-qdgj condition Ready to be true Jan 29 20:04:35.575: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-tq0k condition Ready to be true Jan 29 20:04:35.621: INFO: Node bootstrap-e2e-minion-group-9w8s has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:04:35.621: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:04:35.621: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5nlck" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.621: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-9w8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Node bootstrap-e2e-minion-group-tq0k has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:04:35.630: INFO: Node bootstrap-e2e-minion-group-qdgj has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0] Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0] Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ggkjj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-msh27" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qdgj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.630: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jcl2g" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:04:35.665: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=true. Elapsed: 44.217209ms Jan 29 20:04:35.665: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.665: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=true. Elapsed: 44.435227ms Jan 29 20:04:35.666: INFO: Pod "metadata-proxy-v0.1-5nlck" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.666: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:04:35.666: INFO: Getting external IP address for bootstrap-e2e-minion-group-9w8s Jan 29 20:04:35.666: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-9w8s(35.233.143.195:22) Jan 29 20:04:35.673: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 43.853803ms Jan 29 20:04:35.674: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.677: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=true. Elapsed: 47.637705ms Jan 29 20:04:35.677: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.678: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=true. Elapsed: 47.59175ms Jan 29 20:04:35.678: INFO: Pod "metadata-proxy-v0.1-jcl2g" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.679: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=true. Elapsed: 49.051658ms Jan 29 20:04:35.679: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.679: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=true. 
Elapsed: 49.255787ms Jan 29 20:04:35.679: INFO: Pod "metadata-proxy-v0.1-ggkjj" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.679: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:04:35.679: INFO: Getting external IP address for bootstrap-e2e-minion-group-tq0k Jan 29 20:04:35.679: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-tq0k(34.105.126.211:22) Jan 29 20:04:35.679: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qdgj": Phase="Running", Reason="", readiness=true. Elapsed: 49.423803ms Jan 29 20:04:35.679: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qdgj" satisfied condition "running and ready, or succeeded" Jan 29 20:04:35.679: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0] Jan 29 20:04:35.679: INFO: Getting external IP address for bootstrap-e2e-minion-group-qdgj Jan 29 20:04:35.679: INFO: SSH "nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-qdgj(35.197.112.91:22) Jan 29 20:04:36.211: INFO: ssh prow@35.197.112.91:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 20:04:36.211: INFO: ssh prow@35.197.112.91:22: stdout: "" Jan 29 20:04:36.211: INFO: ssh prow@35.197.112.91:22: stderr: "" Jan 29 20:04:36.211: INFO: ssh prow@35.197.112.91:22: exit code: 0 Jan 29 20:04:36.211: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-qdgj condition Ready to be false Jan 29 20:04:36.216: INFO: ssh prow@35.233.143.195:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | 
sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 20:04:36.216: INFO: ssh prow@35.233.143.195:22: stdout: "" Jan 29 20:04:36.216: INFO: ssh prow@35.233.143.195:22: stderr: "" Jan 29 20:04:36.216: INFO: ssh prow@35.233.143.195:22: exit code: 0 Jan 29 20:04:36.216: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-9w8s condition Ready to be false Jan 29 20:04:36.226: INFO: ssh prow@34.105.126.211:22: command: nohup sh -c 'sleep 10; echo Shutting down eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 down | sudo tee /dev/kmsg; sleep 120; echo Starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; sleep 10; echo Retrying starting up eth0 | sudo tee /dev/kmsg; sudo ip link set eth0 up | sudo tee /dev/kmsg; echo Running dhclient | sudo tee /dev/kmsg; sudo dhclient | sudo tee /dev/kmsg; echo Starting systemd-networkd | sudo tee /dev/kmsg; sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg' >/dev/null 2>&1 & Jan 29 20:04:36.226: INFO: ssh prow@34.105.126.211:22: stdout: "" Jan 29 20:04:36.226: INFO: ssh prow@34.105.126.211:22: stderr: "" Jan 29 20:04:36.226: INFO: ssh prow@34.105.126.211:22: exit code: 0 Jan 29 20:04:36.226: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-tq0k condition Ready to be false Jan 29 20:04:36.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:36.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:36.269: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:38.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:38.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:38.312: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:40.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:40.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:40.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:42.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:42.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 20:04:42.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:44.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:44.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:44.457: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:46.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:46.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:46.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:48.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:48.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:48.551: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:50.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:50.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:50.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:52.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:52.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:52.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:54.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:54.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:54.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:56.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:56.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:56.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:58.791: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:58.791: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:04:58.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:00.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:00.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:00.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:02.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:02.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:02.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:04.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:04.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:04.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:06.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:06.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:06.982: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 20:05:09.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:09.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:09.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:11.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:11.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:11.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:13.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:13.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:13.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:15.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:15.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:15.172: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:17.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:17.220: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:17.220: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:19.260: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:19.267: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:05:19.267: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 20:05:21.304: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-tq0k condition Ready to be true Jan 29 20:05:21.312: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-9w8s condition Ready to be true Jan 29 20:05:21.312: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-qdgj condition Ready to be true Jan 29 20:05:21.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:21.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:21.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:23.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:23.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:23.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:25.438: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:25.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:25.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:27.486: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:27.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:27.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:29.529: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:29.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:29.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:31.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:31.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
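The outage command the test sends to each node over SSH at 20:04:35 above is easier to follow unrolled. The following is the same script reconstructed verbatim from the logged command, with comments added: it takes eth0 down for 120 seconds to sever the node from the cluster, then tries to recover by re-raising the link (twice), re-running DHCP, and restarting systemd-networkd, echoing each step into /dev/kmsg so the outage is visible in the node's kernel log.

# Same command as logged above, detached with nohup so the SSH call
# returns before the link drops; every step is also mirrored to /dev/kmsg.
nohup sh -c '
  sleep 10
  echo Shutting down eth0 | sudo tee /dev/kmsg
  sudo ip link set eth0 down | sudo tee /dev/kmsg    # sever the node
  sleep 120                                          # hold the outage for 2 minutes
  echo Starting up eth0 | sudo tee /dev/kmsg
  sudo ip link set eth0 up | sudo tee /dev/kmsg
  sleep 10
  echo Retrying starting up eth0 | sudo tee /dev/kmsg
  sudo ip link set eth0 up | sudo tee /dev/kmsg      # retry the link bring-up
  echo Running dhclient | sudo tee /dev/kmsg
  sudo dhclient | sudo tee /dev/kmsg                 # reacquire the DHCP lease
  echo Starting systemd-networkd | sudo tee /dev/kmsg
  sudo systemctl restart systemd-networkd | sudo tee /dev/kmsg
' >/dev/null 2>&1 &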
Jan 29 20:05:31.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:33.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:33.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:33.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:35.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:35.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:35.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:37.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:37.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:37.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:39.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:39.775: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:39.775: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:41.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:41.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:41.822: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:43.840: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:43.866: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:43.868: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 20:05:45.883: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:45.910: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:45.912: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:47.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:47.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:47.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:49.970: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:50.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:50.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:52.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:52.079: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:52.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:54.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:54.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:54.128: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:56.101: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:56.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:56.171: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:58.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
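The two waits driving the polling above and below can be reproduced by hand against the same cluster. A minimal kubectl sketch of the equivalent conditions (not the framework's own code path; it assumes the job's kubeconfig at /workspace/.kube/config and the node names from this run): each node must first leave Ready within 2m0s of the link drop, then return to Ready within 5m0s once networking is restored.

# Sketch: mirror the test's two node-readiness waits with kubectl.
export KUBECONFIG=/workspace/.kube/config
for node in bootstrap-e2e-minion-group-qdgj \
            bootstrap-e2e-minion-group-9w8s \
            bootstrap-e2e-minion-group-tq0k; do
  # The node should drop out of Ready while eth0 is down...
  kubectl wait "node/${node}" --for=condition=Ready=false --timeout=2m
  # ...and come back once the network recovers.
  kubectl wait "node/${node}" --for=condition=Ready=true --timeout=5m
done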
Jan 29 20:05:58.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:05:58.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:00.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:00.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:00.259: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:02.236: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:02.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:02.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:04.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:04.345: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:04.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:06.323: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:06.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:06.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:08.366: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:08.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:08.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:10.409: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:10.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 20:06:10.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:12.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:12.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:12.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:14.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:14.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:14.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:16.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:16.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:16.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:18.589: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:18.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:18.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:20.632: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:20.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:20.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:22.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:22.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:22.768: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
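The Reason and message in each entry are read straight off the node's Ready condition: KubeletReady while the kubelet is still posting status, and NodeStatusUnknown once the node controller has gone roughly 40 seconds (the default --node-monitor-grace-period) without an update and flips the condition itself. One way to inspect the same fields directly, as a sketch assuming kubectl access to the cluster:

# Print status/reason/message of the Ready condition for one node from this run.
kubectl get node bootstrap-e2e-minion-group-tq0k -o \
  jsonpath='{range .status.conditions[?(@.type=="Ready")]}{.status}{" "}{.reason}{" "}{.message}{"\n"}{end}'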
Jan 29 20:06:24.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:24.814: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:24.814: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:26.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:26.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:26.860: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:28.838: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:28.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:28.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:30.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:30.951: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:30.951: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:32.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:32.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:32.997: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:34.968: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:35.047: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:35.047: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:37.011: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 20:06:37.093: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:37.093: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:39.071: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:39.154: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:39.154: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:41.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:41.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:41.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:43.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:43.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:43.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:45.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:45.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:45.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:47.250: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:47.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:47.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:49.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:49.394: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
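Once a node reports Ready again, the test moves on (from 20:06:51 below) to waiting up to 5m0s for that node's system pods to be running and ready, or succeeded. A rough kubectl equivalent for the qdgj node's pod set, as a sketch only: kubectl wait checks just the Ready condition, so it covers the "running and ready" half of the framework's predicate but would not accept a Succeeded pod.

# Sketch: spot-check the same kube-system pods the log polls below
# (pod names are from this run).
kubectl -n kube-system wait --for=condition=Ready --timeout=5m \
  pod/kube-dns-autoscaler-5f6455f985-msh27 \
  pod/kube-proxy-bootstrap-e2e-minion-group-qdgj \
  pod/metadata-proxy-v0.1-jcl2g \
  pod/volume-snapshot-controller-0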
Jan 29 20:06:49.394: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:51.337: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:51.440: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0] Jan 29 20:06:51.440: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:51.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:51.440: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-msh27" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:51.440: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qdgj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:51.440: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jcl2g" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:51.485: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.904048ms Jan 29 20:06:51.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:51.486: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 45.737793ms Jan 29 20:06:51.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:51.487: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qdgj": Phase="Running", Reason="", readiness=true. Elapsed: 47.112445ms Jan 29 20:06:51.487: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qdgj" satisfied condition "running and ready, or succeeded" Jan 29 20:06:51.487: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. 
Elapsed: 47.151145ms Jan 29 20:06:51.487: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:06:53.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:06:53.508: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:06:53.508: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5nlck" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:53.508: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-9w8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:53.536: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 2.09622409s Jan 29 20:06:53.536: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:53.537: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.097076043s Jan 29 20:06:53.537: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:53.540: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 2.099504636s Jan 29 20:06:53.540: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:06:53.554: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. 
Elapsed: 45.287758ms Jan 29 20:06:53.554: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 45.358449ms Jan 29 20:06:53.554: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:53.554: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:55.434: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:06:55.434: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ggkjj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:55.434: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:06:55.486: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. Elapsed: 51.585279ms Jan 29 20:06:55.486: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 51.398713ms Jan 29 20:06:55.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:55.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:55.529: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.088815212s Jan 29 20:06:55.529: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:55.531: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 4.090691145s Jan 29 20:06:55.531: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:55.532: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 4.091960247s Jan 29 20:06:55.532: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:06:55.618: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 2.109167668s Jan 29 20:06:55.618: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:55.618: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 2.109152941s Jan 29 20:06:55.618: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:57.530: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.090009397s Jan 29 20:06:57.530: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:57.533: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 6.092611427s Jan 29 20:06:57.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:57.534: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 6.093662326s Jan 29 20:06:57.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:06:57.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 2.099573378s Jan 29 20:06:57.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:57.534: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.099879312s Jan 29 20:06:57.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:57.599: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 4.090237088s Jan 29 20:06:57.599: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 4.090308748s Jan 29 20:06:57.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:57.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:59.529: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.089445045s Jan 29 20:06:59.529: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:59.533: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 8.092829729s Jan 29 20:06:59.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:06:59.534: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.100249535s Jan 29 20:06:59.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 4.100053867s Jan 29 20:06:59.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:59.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:06:59.535: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 8.094260174s Jan 29 20:06:59.535: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:06:59.599: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 6.090980191s Jan 29 20:06:59.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:06:59.600: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 6.091094232s Jan 29 20:06:59.600: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:01.531: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.090870111s Jan 29 20:07:01.531: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:01.532: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 10.092232234s Jan 29 20:07:01.532: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:01.533: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. Elapsed: 6.098711082s Jan 29 20:07:01.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:07:01.533: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 6.098624244s Jan 29 20:07:01.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:07:01.534: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.093544563s Jan 29 20:07:01.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:07:01.597: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 8.088862295s Jan 29 20:07:01.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:01.600: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 8.091135896s Jan 29 20:07:01.600: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:03.536: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.095926049s Jan 29 20:07:03.536: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:03.537: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 12.096991128s Jan 29 20:07:03.537: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:03.537: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.102883345s Jan 29 20:07:03.537: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:07:03.537: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 12.097052515s Jan 29 20:07:03.537: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:07:03.537: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. Elapsed: 8.103242509s Jan 29 20:07:03.537: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:07:03.601: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 10.092550808s Jan 29 20:07:03.601: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:03.601: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 10.092641982s Jan 29 20:07:03.601: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:05.530: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.090603383s Jan 29 20:07:05.530: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:05.533: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 14.092637804s Jan 29 20:07:05.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:07:05.534: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 14.093979674s Jan 29 20:07:05.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:05.534: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. Elapsed: 10.100031958s Jan 29 20:07:05.534: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:07:05.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=true. Elapsed: 10.099903878s Jan 29 20:07:05.534: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" satisfied condition "running and ready, or succeeded" Jan 29 20:07:05.599: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.090267816s Jan 29 20:07:05.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:05.599: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 12.090483785s Jan 29 20:07:05.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:07.531: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.091143988s Jan 29 20:07:07.531: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:06:37 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:07.533: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=false. Elapsed: 16.092629114s Jan 29 20:07:07.533: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=true. Elapsed: 12.098583756s Jan 29 20:07:07.533: INFO: Pod "metadata-proxy-v0.1-ggkjj" satisfied condition "running and ready, or succeeded" Jan 29 20:07:07.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-msh27' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:32 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:07:07.533: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=false. Elapsed: 16.092501391s Jan 29 20:07:07.533: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:07:07.533: INFO: Reboot successful on node bootstrap-e2e-minion-group-tq0k Jan 29 20:07:07.533: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-jcl2g' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:30 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC }] Jan 29 20:07:07.599: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 14.090185502s Jan 29 20:07:07.599: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 14.090252602s Jan 29 20:07:07.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:00:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:07.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:05:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:04:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:07:09.526: INFO: Encountered non-retryable error while getting pod kube-system/kube-dns-autoscaler-5f6455f985-msh27: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-msh27": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:09.526: INFO: Pod kube-dns-autoscaler-5f6455f985-msh27 failed to be running and ready, or succeeded. Jan 29 20:07:09.526: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:09.526: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 29 20:07:09.528: INFO: Encountered non-retryable error while getting pod kube-system/metadata-proxy-v0.1-jcl2g: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-jcl2g": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:09.528: INFO: Pod metadata-proxy-v0.1-jcl2g failed to be running and ready, or succeeded. Jan 29 20:07:09.528: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0] Jan 29 20:07:09.528: INFO: Status for not ready pod kube-system/metadata-proxy-v0.1-jcl2g: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:54 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:05:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:04:30 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:53 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.138.0.4 PodIPs:[{IP:10.138.0.4}] StartTime:2023-01-29 19:58:54 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:metadata-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:29 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 19:58:55 +0000 UTC,FinishedAt:2023-01-29 20:03:35 +0000 UTC,ContainerID:containerd://cacb346a0f03b550ce0669fe18c201e67f7f95bf7beaec50bc36c6e716fd10d8,}} Ready:true RestartCount:1 Image:registry.k8s.io/metadata-proxy:v0.1.12 ImageID:registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a ContainerID:containerd://b6b6c9ec51fb0d6b19b59ef35688dfa56c14b7560b247cd8c7db62ece5e8bf3c Started:0xc004ca0687} {Name:prometheus-to-sd-exporter State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:29 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 19:58:58 +0000 UTC,FinishedAt:2023-01-29 20:03:36 +0000 UTC,ContainerID:containerd://d094a4090d516ae69fd4d36104f37b8d35d92c5ead4cda9908a11c6232a1dd7c,}} Ready:true RestartCount:1 Image:gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1 ImageID:gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 ContainerID:containerd://767b4763626c572e0e0ffef9e47da4348b5f8d80b87e7c610d477757b6aa0114 Started:0xc004ca068f}] QOSClass:Guaranteed EphemeralContainerStatuses:[]} Jan 29 20:07:09.568: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-jcl2g/metadata-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-jcl2g/log?container=metadata-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.568: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-jcl2g/metadata-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-jcl2g/log?container=metadata-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.593: INFO: Encountered non-retryable error while getting pod kube-system/kube-proxy-bootstrap-e2e-minion-group-9w8s: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-9w8s": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:09.593: INFO: Encountered non-retryable error while getting pod 
kube-system/metadata-proxy-v0.1-5nlck: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-5nlck": dial tcp 35.227.160.196:443: connect: connection refused Jan 29 20:07:09.593: INFO: Pod kube-proxy-bootstrap-e2e-minion-group-9w8s failed to be running and ready, or succeeded. Jan 29 20:07:09.593: INFO: Pod metadata-proxy-v0.1-5nlck failed to be running and ready, or succeeded. Jan 29 20:07:09.593: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: false. Pods: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:07:09.593: INFO: Status for not ready pod kube-system/metadata-proxy-v0.1-5nlck: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:52 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:05:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:04:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-29 19:58:52 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:metadata-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:29 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 19:58:53 +0000 UTC,FinishedAt:2023-01-29 20:03:34 +0000 UTC,ContainerID:containerd://60e70f66edffcc197ddedba0ac99d925d2caffd3043619aa2d32a4863e525aa0,}} Ready:true RestartCount:1 Image:registry.k8s.io/metadata-proxy:v0.1.12 ImageID:registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a ContainerID:containerd://ca81caa06e4a3e28d5a981bada2110429a692e3f9f36d77cdab4fbf3441777c9 Started:0xc004b2dbf7} {Name:prometheus-to-sd-exporter State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:29 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 19:58:56 +0000 UTC,FinishedAt:2023-01-29 20:03:34 +0000 UTC,ContainerID:containerd://f327ad10c432957f8349abb91ed50e8a92393b4eb55c5815cc10879bdd6434bb,}} Ready:true RestartCount:1 Image:gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1 ImageID:gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 ContainerID:containerd://ceb4c34b545efdba0e4f7830863af373bdaa14e452cd0396c9176d830ef3dcdb Started:0xc004b2dbff}] QOSClass:Guaranteed EphemeralContainerStatuses:[]} Jan 29 20:07:09.607: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-jcl2g/prometheus-to-sd-exporter, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-jcl2g/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.607: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-jcl2g/prometheus-to-sd-exporter, err: Get 
"https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-jcl2g/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.607: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:59:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:06:37 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:06:37 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:59:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.18 PodIPs:[{IP:10.64.3.18}] StartTime:2023-01-29 19:59:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 40s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(29f0150a-fdb7-4357-b072-d77b38c99300),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 20:06:11 +0000 UTC,FinishedAt:2023-01-29 20:06:37 +0000 UTC,ContainerID:containerd://87941bfe6ef8ac3f156cdfe75f51bcaa5141fee31fcfc2799c84ce4f46b62258,}} Ready:false RestartCount:5 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://87941bfe6ef8ac3f156cdfe75f51bcaa5141fee31fcfc2799c84ce4f46b62258 Started:0xc004ca137f}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 29 20:07:09.633: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-5nlck/metadata-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-5nlck/log?container=metadata-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.633: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-5nlck/metadata-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-5nlck/log?container=metadata-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.647: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.647: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.647: INFO: Status 
for not ready pod kube-system/kube-dns-autoscaler-5f6455f985-msh27: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:59:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:05:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:04:32 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:59:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.3.16 PodIPs:[{IP:10.64.3.16}] StartTime:2023-01-29 19:59:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:autoscaler State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:31 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 19:59:16 +0000 UTC,FinishedAt:2023-01-29 20:03:35 +0000 UTC,ContainerID:containerd://0d7e9cc39c3d3a9e0d7632e78b186e11e57b21654bbc43adefff4759d0ee11fa,}} Ready:true RestartCount:1 Image:registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4 ImageID:registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def ContainerID:containerd://048aa3caf14fa82452e793d2a7284725d292604dcbad5fbee6284dde2abf9601 Started:0xc004a41d17}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 29 20:07:09.672: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-5nlck/prometheus-to-sd-exporter, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-5nlck/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.672: INFO: Retrieving log for the last terminated container kube-system/metadata-proxy-v0.1-5nlck/prometheus-to-sd-exporter, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-5nlck/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.672: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-9w8s: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:51 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:05:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 20:00:17 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 19:58:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.3 PodIP:10.138.0.3 PodIPs:[{IP:10.138.0.3}] StartTime:2023-01-29 19:58:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 20:04:27 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:255,Signal:0,Reason:Unknown,Message:,StartedAt:2023-01-29 20:00:17 +0000 UTC,FinishedAt:2023-01-29 20:03:34 +0000 
UTC,ContainerID:containerd://7e9bb9de010fb9691a57f47855a7f7d34176026c20ce64976c6010841098e2c5,}} Ready:true RestartCount:3 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2 ImageID:sha256:ef97fd17575d534d8bc2960bbf1e744379f3ac6e86b9b97974e086f1516b75e5 ContainerID:containerd://20983402883a5fd9f499e22377f7183345f6103ec834ce1d692b60fa7df8b2ae Started:0xc004c36757}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 29 20:07:09.686: INFO: Retrieving log for container kube-system/kube-dns-autoscaler-5f6455f985-msh27/autoscaler, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-msh27/log?container=autoscaler&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.686: INFO: Retrieving log for the last terminated container kube-system/kube-dns-autoscaler-5f6455f985-msh27/autoscaler, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-dns-autoscaler-5f6455f985-msh27/log?container=autoscaler&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.712: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-9w8s/kube-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-9w8s/log?container=kube-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.712: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-9w8s/kube-proxy, err: Get "https://35.227.160.196/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-9w8s/log?container=kube-proxy&previous=false": dial tcp 35.227.160.196:443: connect: connection refused: Jan 29 20:07:09.712: INFO: Node bootstrap-e2e-minion-group-9w8s failed reboot test. Jan 29 20:07:09.712: INFO: Node bootstrap-e2e-minion-group-qdgj failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 20:07:09.712 < Exit [It] each node by switching off the network interface and ensure they function upon switch on - test/e2e/cloud/gcp/reboot.go:115 @ 01/29/23 20:07:09.712 (2m34.334s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:07:09.712 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 20:07:09.712 Jan 29 20:07:09.752: INFO: Unexpected error: <*url.Error | 0xc003409f50>: { Op: "Get", URL: "https://35.227.160.196/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc002e777c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0038fbb60>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 227, 160, 196], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0013fc7e0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://35.227.160.196/api/v1/namespaces/kube-system/events": dial tcp 35.227.160.196:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/29/23 20:07:09.752 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:07:09.752 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:07:09.752 Jan 29 20:07:09.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 20:07:09.791 (39ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 20:07:09.791 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 20:07:09.791 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:07:09.791 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:07:09.791 STEP: Collecting events from namespace "reboot-2364". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 20:07:09.791 Jan 29 20:07:09.831: INFO: Unexpected error: failed to list events in namespace "reboot-2364": <*url.Error | 0xc0038fbb90>: { Op: "Get", URL: "https://35.227.160.196/api/v1/namespaces/reboot-2364/events", Err: <*net.OpError | 0xc000a78410>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc0039692c0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 35, 227, 160, 196], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0005eee60>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 20:07:09.831 (40ms) [FAILED] failed to list events in namespace "reboot-2364": Get "https://35.227.160.196/api/v1/namespaces/reboot-2364/events": dial tcp 35.227.160.196:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/29/23 20:07:09.831 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 20:07:09.831 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:07:09.831 STEP: Destroying namespace "reboot-2364" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 20:07:09.831 [FAILED] Couldn't delete ns: "reboot-2364": Delete "https://35.227.160.196/api/v1/namespaces/reboot-2364": dial tcp 35.227.160.196:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://35.227.160.196/api/v1/namespaces/reboot-2364", Err:(*net.OpError)(0xc002e77ea0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/29/23 20:07:09.871 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 20:07:09.871 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:07:09.871 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 20:07:09.871 (0s)
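Both failure modes above follow the same pattern: the framework polls each kube-system pod for the condition "running and ready, or succeeded", logging Phase/readiness/Elapsed on every tick, and gives up either at the 5m timeout or as soon as a GET against the apiserver fails non-retryably, which is what the connection-refused errors at 20:07:09 trigger. A rough client-go sketch of that poll, for illustration only — this is not the e2e framework's actual helper, and the function names are hypothetical:

```go
// Minimal approximation of the "running and ready, or succeeded" poll seen
// in the log above. Not the e2e framework's real helper.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podRunningAndReadyOrSucceeded mirrors the condition evaluated in the log:
// a pod passes if it has Succeeded, or is Running with Ready=True.
func podRunningAndReadyOrSucceeded(pod *corev1.Pod) (bool, error) {
	switch pod.Status.Phase {
	case corev1.PodSucceeded:
		return true, nil
	case corev1.PodFailed:
		return false, fmt.Errorf("pod %q failed", pod.Name)
	case corev1.PodRunning:
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
	}
	return false, nil
}

// waitForPodRunningAndReady polls every 2s until the pod satisfies the
// condition or the timeout expires. Returning a non-nil error from the
// condition func aborts the poll immediately — the path the log reports as
// "Encountered non-retryable error" once the apiserver stops answering.
func waitForPodRunningAndReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		ok, err := podRunningAndReadyOrSucceeded(pod)
		fmt.Printf("Pod %q: Phase=%q, ready=%v. Elapsed: %v\n",
			name, pod.Status.Phase, ok, time.Since(start))
		return ok, err
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForPodRunningAndReady(cs, "kube-system", "volume-snapshot-controller-0", 5*time.Minute); err != nil {
		fmt.Println("wait failed:", err)
	}
}
```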
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\striggering\skernel\spanic\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 20:25:43.658
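The log below shows how this variant reboots each node: the test SSHes in, enables sysrq, and writes 'c' to /proc/sysrq-trigger to force a kernel panic. Roughly, in terms of golang.org/x/crypto/ssh — the framework uses its own SSH helper, and the user and auth values here are placeholders:

```go
// Sketch of issuing the reboot command seen in the log below. Illustrative
// only; the e2e framework's SSH plumbing differs.
package main

import (
	"fmt"

	"golang.org/x/crypto/ssh"
)

// panicCmd is the exact command from the log: enable sysrq, wait 10s so the
// SSH session can return cleanly, then write 'c' to /proc/sysrq-trigger,
// which crashes the kernel immediately.
const panicCmd = "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &"

func triggerKernelPanic(host string, cfg *ssh.ClientConfig) error {
	client, err := ssh.Dial("tcp", host+":22", cfg)
	if err != nil {
		return fmt.Errorf("dial %s: %w", host, err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()

	// nohup + '&' detach the command, so Run returns before the panic fires.
	return session.Run(panicCmd)
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "prow", // user seen in the log; auth is a placeholder
		Auth:            []ssh.AuthMethod{ssh.Password("REPLACE_ME")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	if err := triggerKernelPanic("35.233.143.195", cfg); err != nil {
		fmt.Println("ssh failed:", err)
	}
}
```

The nohup/sleep 10 combination is why the log records "exit code: 0": the SSH session exits cleanly about ten seconds before the node actually panics, and only then does the test start waiting for the node's Ready condition to flip.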
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:13:31.831 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 20:13:31.831 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:13:31.831 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 20:13:31.831 Jan 29 20:13:31.831: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 20:13:31.832 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 20:14:59.795 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 20:14:59.885 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 20:15:00.1 (1m28.27s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 20:15:00.1 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 20:15:00.1 (0s) > Enter [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/29/23 20:15:00.1 Jan 29 20:15:00.253: INFO: Getting bootstrap-e2e-minion-group-tq0k Jan 29 20:15:00.253: INFO: Getting bootstrap-e2e-minion-group-qdgj Jan 29 20:15:00.253: INFO: Getting bootstrap-e2e-minion-group-9w8s Jan 29 20:15:00.377: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-tq0k condition Ready to be true Jan 29 20:15:00.378: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-qdgj condition Ready to be true Jan 29 20:15:00.378: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-9w8s condition Ready to be true Jan 29 20:15:00.467: INFO: Node bootstrap-e2e-minion-group-9w8s has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:15:00.467: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:15:00.467: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5nlck" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:15:00.467: INFO: Node bootstrap-e2e-minion-group-qdgj has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0] Jan 29 20:15:00.467: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0] Jan 29 20:15:00.467: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:15:00.468: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-9w8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:15:00.468: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-msh27" in namespace "kube-system" to be "running and 
ready, or succeeded" Jan 29 20:15:00.468: INFO: Node bootstrap-e2e-minion-group-tq0k has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:15:00.468: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:15:00.468: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ggkjj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:15:00.468: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qdgj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:15:00.468: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:15:00.468: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-jcl2g" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:15:00.553: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27": Phase="Running", Reason="", readiness=true. Elapsed: 85.449389ms Jan 29 20:15:00.553: INFO: Pod "kube-dns-autoscaler-5f6455f985-msh27" satisfied condition "running and ready, or succeeded" Jan 29 20:15:00.554: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 87.042897ms Jan 29 20:15:00.554: INFO: Pod "metadata-proxy-v0.1-jcl2g": Phase="Running", Reason="", readiness=true. Elapsed: 86.718879ms Jan 29 20:15:00.554: INFO: Pod "metadata-proxy-v0.1-jcl2g" satisfied condition "running and ready, or succeeded" Jan 29 20:15:00.554: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:00.558: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=true. Elapsed: 90.467743ms Jan 29 20:15:00.558: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s" satisfied condition "running and ready, or succeeded" Jan 29 20:15:00.558: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=true. Elapsed: 90.528151ms Jan 29 20:15:00.558: INFO: Pod "metadata-proxy-v0.1-ggkjj" satisfied condition "running and ready, or succeeded" Jan 29 20:15:00.558: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=true. Elapsed: 90.801369ms Jan 29 20:15:00.558: INFO: Pod "metadata-proxy-v0.1-5nlck" satisfied condition "running and ready, or succeeded" Jan 29 20:15:00.558: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:15:00.558: INFO: Getting external IP address for bootstrap-e2e-minion-group-9w8s Jan 29 20:15:00.558: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-9w8s(35.233.143.195:22) Jan 29 20:15:00.558: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qdgj": Phase="Running", Reason="", readiness=true. Elapsed: 90.549241ms Jan 29 20:15:00.558: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qdgj" satisfied condition "running and ready, or succeeded" Jan 29 20:15:00.559: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 90.954223ms Jan 29 20:15:00.559: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:01.077: INFO: ssh prow@35.233.143.195:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 20:15:01.077: INFO: ssh prow@35.233.143.195:22: stdout: "" Jan 29 20:15:01.077: INFO: ssh prow@35.233.143.195:22: stderr: "" Jan 29 20:15:01.077: INFO: ssh prow@35.233.143.195:22: exit code: 0 Jan 29 20:15:01.077: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-9w8s condition Ready to be false Jan 29 20:15:01.120: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:02.747: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.279246688s Jan 29 20:15:02.747: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:02.771: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.303490466s Jan 29 20:15:02.771: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:03.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:04.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.129854298s Jan 29 20:15:04.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:04.602: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 4.134445831s Jan 29 20:15:04.602: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:05.221: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:06.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.129637613s Jan 29 20:15:06.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:06.602: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 6.134021229s Jan 29 20:15:06.602: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:07.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:08.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.131816416s Jan 29 20:15:08.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:08.604: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 8.136462788s Jan 29 20:15:08.604: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:09.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 20:15:10.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.129813604s Jan 29 20:15:10.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:10.602: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 10.134390181s Jan 29 20:15:10.602: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:11.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:12.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.130321322s Jan 29 20:15:12.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:12.601: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.133775367s Jan 29 20:15:12.602: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:13.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:14.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.130484278s Jan 29 20:15:14.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:14.602: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 14.134033613s Jan 29 20:15:14.602: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:15.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:16.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.129762348s Jan 29 20:15:16.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:16.601: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 16.133331031s Jan 29 20:15:16.601: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:17.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:18.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.13119668s Jan 29 20:15:18.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:18.603: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 18.135603762s Jan 29 20:15:18.603: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:19.613: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 20:15:20.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.129261904s Jan 29 20:15:20.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:20.602: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 20.134285048s Jan 29 20:15:20.602: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:21.657: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:22.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.129486911s Jan 29 20:15:22.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:22.602: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.134531939s Jan 29 20:15:22.602: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:23.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:24.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.13037082s Jan 29 20:15:24.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:24.603: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=false. Elapsed: 24.1352687s Jan 29 20:15:24.603: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-tq0k' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:42 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:15:25.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:26.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.129964199s
Jan 29 20:15:26.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }]
Jan 29 20:15:26.602: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=true. Elapsed: 26.134431217s
Jan 29 20:15:26.602: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" satisfied condition "running and ready, or succeeded"
Jan 29 20:15:26.602: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj]
Jan 29 20:15:26.602: INFO: Getting external IP address for bootstrap-e2e-minion-group-tq0k
Jan 29 20:15:26.602: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-tq0k(34.105.126.211:22)
Jan 29 20:15:27.125: INFO: ssh prow@34.105.126.211:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &
Jan 29 20:15:27.125: INFO: ssh prow@34.105.126.211:22: stdout: ""
Jan 29 20:15:27.125: INFO: ssh prow@34.105.126.211:22: stderr: ""
Jan 29 20:15:27.125: INFO: ssh prow@34.105.126.211:22: exit code: 0
Jan 29 20:15:27.125: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-tq0k condition Ready to be false
Jan 29 20:15:27.167: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:15:27.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:15:28.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.131188555s
Jan 29 20:15:28.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }]
Jan 29 20:15:29.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 20:15:29.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
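The SSH entries above show how the test takes bootstrap-e2e-minion-group-tq0k down: it connects to the node's external IP, enables the sysrq interface, and after a 10-second delay writes 'c' to /proc/sysrq-trigger, forcing an immediate kernel panic and reboot. The nohup/sleep combination lets the SSH command return before the panic fires, which is why stdout and stderr are empty and the exit code is 0. A hedged sketch of issuing the same command through the local ssh binary (host and user are taken from the log; the real test drives SSH through the e2e framework's own helper):

```go
// Sketch: reproduce the crash command from the log entry above by shelling
// out to the local ssh binary. Not the framework's implementation.
package main

import (
	"fmt"
	"os/exec"
)

// crashCmd enables sysrq, sleeps 10s so the SSH session can detach cleanly
// (hence the empty output and exit code 0 above), then writes 'c' to
// sysrq-trigger to force an immediate kernel panic.
const crashCmd = `nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &`

func main() {
	host := "prow@34.105.126.211" // placeholder: substitute your node's user@external-IP
	out, err := exec.Command("ssh", host, crashCmd).CombinedOutput()
	fmt.Printf("output: %q, err: %v\n", out, err)
}
```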
AppArmor enabled Jan 29 20:15:30.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 30.130301606s Jan 29 20:15:30.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:31.253: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:31.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:32.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.130603603s Jan 29 20:15:32.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:33.297: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:33.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:34.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.129381029s Jan 29 20:15:34.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:35.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:35.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:36.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.129687336s Jan 29 20:15:36.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:37.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:38.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:38.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.130994785s Jan 29 20:15:38.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:39.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:40.053: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:40.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.13018133s Jan 29 20:15:40.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:41.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
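After triggering the panic, the test "Waits up to 2m0s for node ... condition Ready to be false"; each "Condition Ready of node ... is true instead of false" line above is one poll iteration while the kubelet's last heartbeat is still fresh. A sketch of such a poll against the node's Ready condition, as an assumed helper in the same hypothetical rebootutil package:

```go
// Sketch of the poll behind "Waiting up to 2m0s for node ... condition
// Ready to be false". Not the framework's actual code.
package rebootutil

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady polls the node's Ready condition every 2s until it has
// the wanted status or the timeout expires, logging each miss the way the
// entries above do.
func waitForNodeReady(cs kubernetes.Interface, name string, want v1.ConditionStatus, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == v1.NodeReady {
				if cond.Status != want {
					fmt.Printf("Condition Ready of node %s is %s instead of %s. Reason: %s, message: %s\n",
						name, cond.Status, want, cond.Reason, cond.Message)
					return false, nil
				}
				return true, nil
			}
		}
		return false, nil
	})
}
```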
AppArmor enabled Jan 29 20:15:42.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:42.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.129971222s Jan 29 20:15:42.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:43.515: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:44.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:44.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.130566233s Jan 29 20:15:44.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:45.557: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:46.184: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-9w8s condition Ready to be true Jan 29 20:15:46.227: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:15:46.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.130136467s Jan 29 20:15:46.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:47.604: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:48.271: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:15:48.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.131821561s Jan 29 20:15:48.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:49.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:50.316: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:15:50.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.129777992s Jan 29 20:15:50.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:51.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
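At 20:15:46 the kubelet on bootstrap-e2e-minion-group-9w8s has stopped posting status, so the node lifecycle controller flips its Ready condition to "NodeStatusUnknown" and then taints the node with node.kubernetes.io/unreachable, first NoSchedule (20:15:44) and then NoExecute (20:15:49). The "is false, but Node is tainted by NodeController with [...]. Failure" lines print those taints on each poll while the test waits for Ready to become true again. One way to list them, as a hypothetical helper continuing the rebootutil sketch (same imports as above):

```go
// Hypothetical helper: report the NodeController taints that the
// "but Node is tainted by NodeController with [...]" entries print.
func printUnreachableTaints(cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, t := range node.Spec.Taints {
		if t.Key == "node.kubernetes.io/unreachable" {
			fmt.Printf("node %s tainted: %s %s since %v\n", name, t.Key, t.Effect, t.TimeAdded)
		}
	}
	return nil
}
```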
AppArmor enabled Jan 29 20:15:52.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:15:52.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.130570695s Jan 29 20:15:52.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:53.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:54.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:15:54.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.131352427s Jan 29 20:15:54.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:55.779: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:56.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:15:56.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.130317379s Jan 29 20:15:56.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:57.823: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:15:58.494: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:15:58.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.130735045s Jan 29 20:15:58.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:15:59.867: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:16:00.540: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:00.628: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.160544705s Jan 29 20:16:00.628: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:01.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 20:16:02.583: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:02.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m2.129957732s Jan 29 20:16:02.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:03.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:16:04.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.13076693s Jan 29 20:16:04.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:04.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:05.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:16:06.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m6.130544027s Jan 29 20:16:06.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:06.673: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:08.042: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:16:08.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.130938929s Jan 29 20:16:08.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:08.717: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:10.086: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:16:10.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m10.130035973s Jan 29 20:16:10.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:10.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:12.130: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:16:12.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.130364857s Jan 29 20:16:12.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:12.805: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:14.175: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 20:16:14.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m14.130049266s Jan 29 20:16:14.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:14.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:16.218: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-tq0k condition Ready to be true Jan 29 20:16:16.260: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:16:16.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.129630852s Jan 29 20:16:16.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:16.893: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:18.304: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 20:16:18.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m18.131974998s Jan 29 20:16:18.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:18.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:20.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:20.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.129556964s Jan 29 20:16:20.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:20.983: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:22.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:22.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m22.130720056s Jan 29 20:16:22.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:23.027: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:24.439: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:24.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.130972348s Jan 29 20:16:24.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:25.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:26.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:26.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m26.130240124s Jan 29 20:16:26.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:27.117: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:28.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:28.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.131050281s Jan 29 20:16:28.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:29.160: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:30.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:30.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m30.129985925s Jan 29 20:16:30.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:31.205: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:32.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.130010997s Jan 29 20:16:32.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:32.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:33.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-9w8s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:15:44 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:15:49 +0000 UTC}]. Failure Jan 29 20:16:34.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.129796231s Jan 29 20:16:34.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:34.661: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. 
Failure Jan 29 20:16:35.293: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:16:35.293: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5nlck" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:16:35.293: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-9w8s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:16:35.339: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=false. Elapsed: 45.541799ms Jan 29 20:16:35.339: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=false. Elapsed: 45.589507ms Jan 29 20:16:35.339: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-9w8s' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:15:44 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:12:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:16:35.339: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-5nlck' on 'bootstrap-e2e-minion-group-9w8s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:15:44 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:16:34 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:51 +0000 UTC }] Jan 29 20:16:36.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.130308556s Jan 29 20:16:36.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:36.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:37.384: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s": Phase="Running", Reason="", readiness=true. Elapsed: 2.090840002s Jan 29 20:16:37.384: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-9w8s" satisfied condition "running and ready, or succeeded" Jan 29 20:16:37.384: INFO: Pod "metadata-proxy-v0.1-5nlck": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.090893489s Jan 29 20:16:37.384: INFO: Pod "metadata-proxy-v0.1-5nlck" satisfied condition "running and ready, or succeeded" Jan 29 20:16:37.384: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-9w8s metadata-proxy-v0.1-5nlck] Jan 29 20:16:37.384: INFO: Reboot successful on node bootstrap-e2e-minion-group-9w8s Jan 29 20:16:38.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.130742749s Jan 29 20:16:38.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:38.752: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:40.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.130331728s Jan 29 20:16:40.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:40.795: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:42.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m42.130754186s Jan 29 20:16:42.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:42.840: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:44.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m44.129708471s Jan 29 20:16:44.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:44.883: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:46.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m46.129755297s Jan 29 20:16:46.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:46.927: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:48.606: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m48.138313385s Jan 29 20:16:48.606: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:48.972: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:50.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m50.130850139s Jan 29 20:16:50.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:51.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:52.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m52.129290305s Jan 29 20:16:52.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:53.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:54.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m54.129746186s Jan 29 20:16:54.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:55.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:56.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m56.129619817s Jan 29 20:16:56.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:57.148: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:16:14 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure Jan 29 20:16:58.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m58.131486877s Jan 29 20:16:58.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:16:59.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-tq0k is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:16:20 +0000 UTC}]. Failure
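Note the 20:16:59.191 entry just above: the Ready condition of bootstrap-e2e-minion-group-tq0k has flipped back to true, yet the check still reports Failure because the node-controller's node.kubernetes.io/unreachable NoExecute taint has not been removed yet. A node-recovery check therefore has to look at both the Ready condition and the taints. A sketch under the same client-go assumptions (the helper name is hypothetical):

```go
// Hypothetical recovery check matching the "Condition Ready of node ... is
// true, but Node is tainted by NodeController" log line: Ready alone is not
// enough while the unreachable/not-ready taints are still present.
package e2esketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func nodeReadyAndUntainted(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	ready := false
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	for _, t := range node.Spec.Taints {
		// node.kubernetes.io/unreachable and node.kubernetes.io/not-ready are
		// the taints the node-controller adds and removes around a reboot.
		if t.Key == corev1.TaintNodeUnreachable || t.Key == corev1.TaintNodeNotReady {
			return false, nil
		}
	}
	return ready, nil
}
```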
Jan 29 20:17:00.601: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m0.133317587s Jan 29 20:17:00.601: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:01.255: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:17:01.255: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ggkjj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:17:01.255: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 20:17:01.300: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k": Phase="Running", Reason="", readiness=true. Elapsed: 44.196615ms Jan 29 20:17:01.300: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-tq0k" satisfied condition "running and ready, or succeeded" Jan 29 20:17:01.300: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=false. Elapsed: 44.237362ms Jan 29 20:17:01.300: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ggkjj' on 'bootstrap-e2e-minion-group-tq0k' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:16:15 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:16:58 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:58:52 +0000 UTC }] Jan 29 20:17:02.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m2.130126747s Jan 29 20:17:02.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:03.344: INFO: Pod "metadata-proxy-v0.1-ggkjj": Phase="Running", Reason="", readiness=true. Elapsed: 2.088326227s Jan 29 20:17:03.344: INFO: Pod "metadata-proxy-v0.1-ggkjj" satisfied condition "running and ready, or succeeded"
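Once a node passes the readiness-and-taint check, the test waits (up to 5m0s) for that node's system pods, here kube-proxy-bootstrap-e2e-minion-group-tq0k and metadata-proxy-v0.1-ggkjj, before declaring the reboot successful. A sketch of that step, reusing the hypothetical waitForPodRunningReadyOrSucceeded helper from the earlier snippet; the real framework polls the pods concurrently, this version is sequential for brevity:

```go
// Hypothetical per-node verification step: every system pod on the rebooted
// node must become running-and-ready (or succeed) before the reboot counts.
// Continues the e2esketch package above and reuses its poll helper.
func verifyRebootedNode(cs kubernetes.Interface, nodeName string, podNames []string) error {
	for _, p := range podNames {
		if err := waitForPodRunningReadyOrSucceeded(cs, "kube-system", p, 5*time.Minute); err != nil {
			return fmt.Errorf("pod %q not ready after reboot of %q: %w", p, nodeName, err)
		}
	}
	fmt.Printf("Reboot successful on node %s\n", nodeName)
	return nil
}
```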
Jan 29 20:17:03.344: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-tq0k metadata-proxy-v0.1-ggkjj] Jan 29 20:17:03.344: INFO: Reboot successful on node bootstrap-e2e-minion-group-tq0k Jan 29 20:17:04.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m4.130282417s Jan 29 20:17:04.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:06.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m6.129761933s Jan 29 20:17:06.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:08.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m8.130965861s Jan 29 20:17:08.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }]
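For reading the condition dumps that fill the rest of this log: each {...} tuple is a corev1.PodCondition printed with %v, in declared field order Type, Status, LastProbeTime, LastTransitionTime, Reason, Message. The kubelet leaves LastProbeTime unset, which is why every tuple's first timestamp is the zero time 0001-01-01 00:00:00 +0000 UTC; the second timestamp (here 2023-01-29 20:14:35) is when Ready last transitioned to False. A small illustration, reconstructing one tuple from the dump above as an assumed-equivalent value:

```go
// Reconstructing one tuple from the dump above, e.g.
// {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC
//  ContainersNotReady containers with unready status: [volume-snapshot-controller]}
package e2esketch

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func printReadyCondition() {
	cond := corev1.PodCondition{
		Type:   corev1.PodReady,
		Status: corev1.ConditionFalse,
		// LastProbeTime stays zero -> prints as "0001-01-01 00:00:00 +0000 UTC"
		LastTransitionTime: metav1.Date(2023, time.January, 29, 20, 14, 35, 0, time.UTC),
		Reason:             "ContainersNotReady",
		Message:            "containers with unready status: [volume-snapshot-controller]",
	}
	fmt.Printf("{%v %v %v %v %v %v}\n", cond.Type, cond.Status,
		cond.LastProbeTime, cond.LastTransitionTime, cond.Reason, cond.Message)
}
```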
Jan 29 20:17:10.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m10.131473483s Jan 29 20:17:10.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:12.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m12.130264747s Jan 29 20:17:12.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:14.620: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m14.152122236s Jan 29 20:17:14.620: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:16.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m16.130753569s Jan 29 20:17:16.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:18.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 2m18.131280312s Jan 29 20:17:18.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:20.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m20.129513016s Jan 29 20:17:20.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:22.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m22.129341452s Jan 29 20:17:22.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:24.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m24.129501615s Jan 29 20:17:24.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:26.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m26.129522835s Jan 29 20:17:26.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:28.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m28.131211249s Jan 29 20:17:28.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:30.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m30.130388831s Jan 29 20:17:30.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:32.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m32.130064723s Jan 29 20:17:32.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:34.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m34.131960173s Jan 29 20:17:34.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:36.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m36.131148183s Jan 29 20:17:36.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:38.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m38.130957187s Jan 29 20:17:38.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:40.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m40.130535863s Jan 29 20:17:40.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:42.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m42.130279189s Jan 29 20:17:42.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:44.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m44.129295245s Jan 29 20:17:44.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:46.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m46.131728126s Jan 29 20:17:46.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:48.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m48.131453458s Jan 29 20:17:48.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:50.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m50.130123171s Jan 29 20:17:50.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:52.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m52.130589307s Jan 29 20:17:52.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:54.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m54.12969895s Jan 29 20:17:54.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:56.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2m56.129852679s Jan 29 20:17:56.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:17:58.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2m58.131732162s Jan 29 20:17:58.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:00.617: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m0.149479231s Jan 29 20:18:00.617: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:02.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m2.130422972s Jan 29 20:18:02.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:04.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m4.13009631s Jan 29 20:18:04.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:06.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m6.129550114s Jan 29 20:18:06.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:08.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m8.131118453s Jan 29 20:18:08.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:10.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m10.129455806s Jan 29 20:18:10.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:12.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m12.130245424s Jan 29 20:18:12.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:14.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m14.130853827s Jan 29 20:18:14.598: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:16.597: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m16.12946339s Jan 29 20:18:16.597: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:18.600: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m18.132513559s Jan 29 20:18:18.600: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:20.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 3m20.131067298s Jan 29 20:18:20.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }] Jan 29 20:18:22.599: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 3m22.131458001s Jan 29 20:18:22.599: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-qdgj' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 20:14:35 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 19:59:08 +0000 UTC }]
[... identical poll results (Pod "volume-snapshot-controller-0": Phase="Running", readiness=false, with the same four conditions as above) repeat every ~2s from 20:18:24.598 (elapsed 3m24s) through 20:19:48.599 (elapsed 4m48s) ...]
Jan 29 20:19:50.598: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 4m50.130321098s
Jan 29 20:19:50.598: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 20:19:50.598: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-msh27 kube-proxy-bootstrap-e2e-minion-group-qdgj metadata-proxy-v0.1-jcl2g volume-snapshot-controller-0]
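For readers unfamiliar with the framework, the "running and ready, or succeeded" wait above is just a 2-second poll over the pod's status conditions. A minimal client-go sketch of that loop follows (illustrative only: the package name, waitForPodReady, and its parameters are assumptions, not the framework's actual helper):

```go
// A minimal sketch of the 2s readiness poll behind "running and ready,
// or succeeded"; not the e2e framework's actual helper.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls the pod every 2s until it reports Ready=True,
// reaches PodSucceeded, or the timeout expires.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "keep polling"
			}
			if pod.Status.Phase == corev1.PodSucceeded {
				return true, nil
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
			// Mirrors the "didn't have condition {Ready True}" lines above.
			fmt.Printf("pod %q phase=%s not ready; conditions: %v\n",
				name, pod.Status.Phase, pod.Status.Conditions)
			return false, nil
		})
}
```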
Jan 29 20:19:50.598: INFO: Getting external IP address for bootstrap-e2e-minion-group-qdgj
Jan 29 20:19:50.598: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-qdgj(35.197.112.91:22)
Jan 29 20:19:51.117: INFO: ssh prow@35.197.112.91:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &
Jan 29 20:19:51.117: INFO: ssh prow@35.197.112.91:22: stdout: ""
Jan 29 20:19:51.117: INFO: ssh prow@35.197.112.91:22: stderr: ""
Jan 29 20:19:51.117: INFO: ssh prow@35.197.112.91:22: exit code: 0
Jan 29 20:19:51.117: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-qdgj condition Ready to be false
Jan 29 20:19:51.160: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
[... the same 'is true instead of false' poll repeats every ~2s through 20:20:40.264 while the node is still up; Ginkgo emits progress reports at Spec Runtime 6m28.271s, 6m48.273s and 7m8.274s with identical goroutine stacks; the first is reproduced below ...]
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart (Spec Runtime: 6m28.271s)
  test/e2e/cloud/gcp/reboot.go:109
  In [It] (Node Runtime: 5m0.001s)
    test/e2e/cloud/gcp/reboot.go:109
  Spec Goroutine
  goroutine 6071 [semacquire, 5 minutes]
    sync.runtime_Semacquire(0xc00101a990?)
      /usr/local/go/src/runtime/sema.go:62
    sync.(*WaitGroup).Wait(0x7ff51c580d20?)
      /usr/local/go/src/sync/waitgroup.go:139
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7ff51c580d20?, 0xc0023ac880}, {0x8147108?, 0xc0033bb520}, {0x78b37be, 0x7d}, 0x0)
      test/e2e/cloud/gcp/reboot.go:181
  > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.5({0x7ff51c580d20?, 0xc0023ac880?})
      test/e2e/cloud/gcp/reboot.go:112
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc0023ac880})
      vendor/github.com/onsi/ginkgo/v2/internal/node.go:452
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854
    k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode
      vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841
  Goroutines of Interest
  goroutine 6055 [sleep]
    time.Sleep(0x77359400)
      /usr/local/go/src/runtime/time.go:195
    k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7ff51c580d20, 0xc0023ac880}, {0x8147108, 0xc0033bb520}, {0xc004abc300, 0x1f}, {0x76bb977, 0x5}, 0x0, 0x1bf08eb000)
      test/e2e/framework/node/wait.go:119
    k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeNotReady(...)
      test/e2e/framework/node/wait.go:138
  > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7ff51c580d20, 0xc0023ac880}, {0x8147108, 0xc0033bb520}, {0x7ffd764d2600, 0x3}, {0xc004abc300, 0x1f}, {0x78b37be, 0x7d})
      test/e2e/cloud/gcp/reboot.go:296
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
      test/e2e/cloud/gcp/reboot.go:173
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 20:20:42.308: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-qdgj condition Ready to be true
Jan 29 20:20:42.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 20:20:44.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 20:20:46.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:20:40 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure
Jan 29 20:20:48.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:20:40 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure
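The reboot above is induced over SSH: echo 1 > /proc/sys/kernel/sysrq enables all sysrq functions, and echo c > /proc/sysrq-trigger then forces an immediate kernel crash, after which rebootNode waits for the node's Ready condition to flip to false and back to true. The stacks point at test/e2e/framework/node.WaitConditionToBe; a simplified sketch of that kind of poll follows (a sketch only, assuming a hypothetical waitNodeReadyToBe, not a verbatim copy of wait.go):

```go
// A simplified sketch of the node poll that produces the
// "Condition Ready of node ... is X instead of Y" lines above.
package nodewait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReadyToBe polls every 2s until the node's Ready condition matches
// wantReady: false right after the kernel panic, then true again.
func waitNodeReadyToBe(ctx context.Context, cs kubernetes.Interface, nodeName string, wantReady bool, timeout time.Duration) error {
	for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
		node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			continue // the API may briefly fail while the node reboots
		}
		for _, c := range node.Status.Conditions {
			if c.Type != corev1.NodeReady {
				continue
			}
			if (c.Status == corev1.ConditionTrue) == wantReady {
				return nil
			}
			fmt.Printf("Condition Ready of node %s is %v instead of %v. Reason: %s, message: %s\n",
				nodeName, c.Status == corev1.ConditionTrue, wantReady, c.Reason, c.Message)
		}
	}
	return fmt.Errorf("node %s never reached Ready=%v within %v", nodeName, wantReady, timeout)
}
```

The 2s cadence in this sketch matches the timestamps of the poll lines in the log.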
Jan 29 20:20:50.532: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 20:20:40 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure
[... the same 'is false, but Node is tainted' poll repeats every ~2s through 20:21:23.244; progress reports at Spec Runtime 7m28.275s and 7m48.277s show the wait has moved on from WaitForNodeToBeNotReady to WaitForNodeToBeReady (5m0s timeout); the 7m28s report follows ...]
Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart (Spec Runtime: 7m28.275s)
  test/e2e/cloud/gcp/reboot.go:109
  In [It] (Node Runtime: 6m0.006s)
    test/e2e/cloud/gcp/reboot.go:109
  Spec Goroutine
  goroutine 6071 [semacquire, 6 minutes]
    (same testReboot/runNode stack as in the 6m28s report above)
  Goroutines of Interest
  goroutine 6055 [sleep]
    time.Sleep(0x77359400)
      /usr/local/go/src/runtime/time.go:195
    k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7ff51c580d20, 0xc0023ac880}, {0x8147108, 0xc0033bb520}, {0xc004abc300, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800)
      test/e2e/framework/node/wait.go:119
    k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
      test/e2e/framework/node/wait.go:143
  > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7ff51c580d20, 0xc0023ac880}, {0x8147108, 0xc0033bb520}, {0x7ffd764d2600, 0x3}, {0xc004abc300, 0x1f}, {0x78b37be, 0x7d})
      test/e2e/cloud/gcp/reboot.go:301
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1)
      test/e2e/cloud/gcp/reboot.go:173
  > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot
      test/e2e/cloud/gcp/reboot.go:169
Jan 29 20:21:25.294: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure
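The "is true, but Node is tainted ... Failure" lines show why the Ready wait keeps failing even after the kubelet recovers: the readiness check also treats a node still carrying NodeController taints as not OK. A minimal sketch of such a taint check follows (an assumption about the filtering, not the framework's exact code; hasUnreachableTaint is hypothetical):

```go
// A minimal sketch of a taint check that would reject the node in the
// "is true, but Node is tainted" state logged above.
package nodewait

import (
	corev1 "k8s.io/api/core/v1"
)

// hasUnreachableTaint reports whether the node still carries a
// NodeController not-ready/unreachable taint with a NoSchedule or
// NoExecute effect, as in the log lines above.
func hasUnreachableTaint(node *corev1.Node) bool {
	for _, t := range node.Spec.Taints {
		switch t.Key {
		case corev1.TaintNodeUnreachable, corev1.TaintNodeNotReady:
			if t.Effect == corev1.TaintEffectNoSchedule || t.Effect == corev1.TaintEffectNoExecute {
				return true
			}
		}
	}
	return false
}
```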
Failure Jan 29 20:21:33.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:21:35.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:21:37.568: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:21:39.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart (Spec Runtime: 8m8.278s) test/e2e/cloud/gcp/reboot.go:109 In [It] (Node Runtime: 6m40.009s) test/e2e/cloud/gcp/reboot.go:109 Spec Goroutine goroutine 6071 [semacquire, 7 minutes] sync.runtime_Semacquire(0xc00101a990?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7ff51c580d20?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7ff51c580d20?, 0xc0023ac880}, {0x8147108?, 0xc0033bb520}, {0x78b37be, 0x7d}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.5({0x7ff51c580d20?, 0xc0023ac880?}) test/e2e/cloud/gcp/reboot.go:112 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc0023ac880}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 6055 [sleep] time.Sleep(0x77359400) /usr/local/go/src/runtime/time.go:195 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7ff51c580d20, 0xc0023ac880}, {0x8147108, 0xc0033bb520}, {0xc004abc300, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:119 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7ff51c580d20, 0xc0023ac880}, {0x8147108, 0xc0033bb520}, {0x7ffd764d2600, 0x3}, {0xc004abc300, 0x1f}, {0x78b37be, 0x7d}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 29 20:21:41.656: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:21:43.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. 
Failure [the identical taint poll repeated every ~2s from 20:21:45.745 through 20:22:00.102] Automatically polling progress: [same spec as above] (Spec Runtime: 8m28.28s) test/e2e/cloud/gcp/reboot.go:109 In [It] (Node Runtime: 7m0.01s) test/e2e/cloud/gcp/reboot.go:109 [Spec Goroutine and Goroutines of Interest stacks identical to the 8m8.278s snapshot above] [taint poll repeated from 20:22:02.146 through 20:22:18.502] Automatically polling progress: [same spec] (Spec Runtime: 8m48.282s) In [It] (Node Runtime: 7m20.012s) [stacks unchanged] [taint poll repeated from 20:22:20.545 through 20:22:36.903]
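The Spec Goroutine half of the 8m8.278s snapshot explains why the whole spec waits: testReboot (reboot.go:181) is parked in sync.WaitGroup.Wait while one worker goroutine per node (testReboot.func2 calling rebootNode) does the reboot and re-wait, so a single slow node stalls everything. A sketch of that fan-out shape, with hypothetical helper names:

package e2esketch

import (
	"context"
	"fmt"
	"sync"
)

// rebootNodeAndWait stands in for the real per-node sequence (issue the
// reboot command, then wait for the node to go away and come back Ready,
// as in rebootNode at reboot.go:301); returns false on the 5m timeout.
func rebootNodeAndWait(ctx context.Context, node, cmd string) bool {
	// ... SSH the reboot command, then poll readiness as sketched earlier ...
	return false
}

// rebootAllNodes mirrors the fan-out visible in the trace: one goroutine per
// node, and the spec goroutine blocked in wg.Wait() (reboot.go:181) until
// every worker reports back.
func rebootAllNodes(ctx context.Context, nodes []string, cmd string) error {
	results := make([]bool, len(nodes))
	var wg sync.WaitGroup
	wg.Add(len(nodes))
	for i := range nodes {
		go func(ix int) {
			defer wg.Done()
			results[ix] = rebootNodeAndWait(ctx, nodes[ix], cmd)
		}(i)
	}
	wg.Wait() // the frame the Spec Goroutine is stuck in above
	for i, ok := range results {
		if !ok {
			return fmt.Errorf("node %s failed reboot test", nodes[i])
		}
	}
	return nil
}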
Failure Jan 29 20:22:38.947: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Automatically polling progress: [same spec] (Spec Runtime: 9m8.283s) In [It] (Node Runtime: 7m40.014s) [stacks unchanged from the 8m8.278s snapshot] [taint poll repeated every ~2s from 20:22:40.990 through 20:22:59.387] Automatically polling progress: [same spec] (Spec Runtime: 9m28.285s) In [It] (Node Runtime: 8m0.015s) [stacks unchanged] Jan 29 20:23:01.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}].
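This spec reboots nodes by deliberately crashing the kernel rather than issuing a clean reboot. In the trace, rebootNode only receives the command as an opaque string (the {0x78b37be, 0x7d} argument is a 125-byte string constant), so the exact command is not visible in this log. On Linux the standard way to force such a crash is the magic-sysrq interface; the snippet below is an illustrative stand-in, not the framework's actual command:

package e2esketch

import (
	"fmt"
	"os/exec"
)

// triggerKernelPanic is illustrative only: crash a remote host's kernel via
// magic sysrq. The e2e framework sends a similar command over SSH; the exact
// string it uses is not shown in this log.
func triggerKernelPanic(host string) error {
	// 'echo c > /proc/sysrq-trigger' panics the kernel immediately, so run it
	// detached (nohup ... &) or the dying SSH session would mask success.
	cmd := `nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &`
	out, err := exec.Command("ssh", host, cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh %s: %v (output: %s)", host, err, out)
	}
	return nil
}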
Failure [taint poll repeated every ~2s from 20:23:03.505 through 20:23:19.855] Automatically polling progress: [same spec] (Spec Runtime: 9m48.286s) In [It] (Node Runtime: 8m20.016s) [stacks unchanged from the 8m8.278s snapshot] [taint poll repeated from 20:23:21.899 through 20:23:38.254]
Failure Automatically polling progress: [same spec] (Spec Runtime: 10m8.289s) In [It] (Node Runtime: 8m40.019s) [stacks unchanged from the 8m8.278s snapshot] [taint poll repeated every ~2s from 20:23:40.300 through 20:23:58.705] Automatically polling progress: [same spec] (Spec Runtime: 10m28.29s) In [It] (Node Runtime: 9m0.021s) [stacks unchanged] [taint poll repeated from 20:24:00.749 through 20:24:02.794]
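Every poll line in this stretch is the same story: the kubelet came back after the panic and set Ready=True, but the node-lifecycle controller had tainted the node node.kubernetes.io/unreachable:NoExecute at 20:20:45, while it was down, and has not yet lifted the taint, and the framework refuses to count a tainted node as ready. A small client-go helper to dump exactly what the poll is seeing (hypothetical helper, but the Get call matches the frames above):

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeControllerTaints prints a node's taints in the same
// {key effect timeAdded} shape the poll lines above use, to confirm what is
// still blocking readiness.
func printNodeControllerTaints(ctx context.Context, c kubernetes.Interface, name string) error {
	node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, t := range node.Spec.Taints {
		fmt.Printf("{%s %s %v}\n", t.Key, t.Effect, t.TimeAdded)
	}
	return nil
}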
Failure [taint poll repeated from 20:24:04.846 through 20:24:06.890] Jan 29 20:24:08.931 through 20:24:19.133: INFO: Couldn't get node bootstrap-e2e-minion-group-qdgj [repeated every ~2s] Automatically polling progress: [same spec] (Spec Runtime: 10m48.292s) In [It] (Node Runtime: 9m20.022s) [stacks unchanged from the 8m8.278s snapshot] Jan 29 20:24:21.172 through 20:24:39.559: INFO: Couldn't get node bootstrap-e2e-minion-group-qdgj [repeated every ~2s] Automatically polling progress: [same spec] (Spec Runtime: 11m8.297s) In [It] (Node Runtime: 9m40.027s) [Spec Goroutine stack unchanged] Goroutines of Interest goroutine 6055 [sleep] [frames unchanged through] k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...)
test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7ff51c580d20, 0xc0023ac880}, {0x8147108, 0xc0033bb520}, {0x7ffd764d2600, 0x3}, {0xc004abc300, 0x1f}, {0x78b37be, 0x7d}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 29 20:24:41.602: INFO: Couldn't get node bootstrap-e2e-minion-group-qdgj Jan 29 20:24:43.643: INFO: Couldn't get node bootstrap-e2e-minion-group-qdgj Jan 29 20:24:45.685: INFO: Couldn't get node bootstrap-e2e-minion-group-qdgj Jan 29 20:24:47.728: INFO: Couldn't get node bootstrap-e2e-minion-group-qdgj Jan 29 20:24:49.772: INFO: Couldn't get node bootstrap-e2e-minion-group-qdgj Jan 29 20:24:51.816: INFO: Couldn't get node bootstrap-e2e-minion-group-qdgj Automatically polling progress: [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart (Spec Runtime: 11m28.298s) test/e2e/cloud/gcp/reboot.go:109 In [It] (Node Runtime: 10m0.029s) test/e2e/cloud/gcp/reboot.go:109 Spec Goroutine goroutine 6071 [semacquire, 10 minutes] sync.runtime_Semacquire(0xc00101a990?) /usr/local/go/src/runtime/sema.go:62 sync.(*WaitGroup).Wait(0x7ff51c580d20?) /usr/local/go/src/sync/waitgroup.go:139 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot({0x7ff51c580d20?, 0xc0023ac880}, {0x8147108?, 0xc0033bb520}, {0x78b37be, 0x7d}, 0x0) test/e2e/cloud/gcp/reboot.go:181 > k8s.io/kubernetes/test/e2e/cloud/gcp.glob..func8.5({0x7ff51c580d20?, 0xc0023ac880?}) test/e2e/cloud/gcp/reboot.go:112 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func2({0x8111ee8?, 0xc0023ac880}) vendor/github.com/onsi/ginkgo/v2/internal/node.go:452 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3() vendor/github.com/onsi/ginkgo/v2/internal/suite.go:854 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/v2/internal.(*Suite).runNode vendor/github.com/onsi/ginkgo/v2/internal/suite.go:841 Goroutines of Interest goroutine 6055 [select] k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).RoundTrip(0xc00214c180, 0xc001cf6700) vendor/golang.org/x/net/http2/transport.go:1273 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTripOpt(0xc0024207e0, 0xc001cf6700, {0xb0?}) vendor/golang.org/x/net/http2/transport.go:565 k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).RoundTrip(0xc0024a2000?, 0xc003584900?) vendor/golang.org/x/net/http2/transport.go:517 net/http.(*Transport).roundTrip(0xc0024a2000, 0xc001cf6700) /usr/local/go/src/net/http/transport.go:593 net/http.(*Transport).RoundTrip(0x70de840?, 0xc003408b10?) /usr/local/go/src/net/http/roundtrip.go:17 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*bearerAuthRoundTripper).RoundTrip(0xc003234060, 0xc001cf6600) vendor/k8s.io/client-go/transport/round_trippers.go:317 k8s.io/kubernetes/vendor/k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc002429600, 0xc001cf6500) vendor/k8s.io/client-go/transport/round_trippers.go:168 net/http.send(0xc001cf6500, {0x80d5d80, 0xc002429600}, {0x75d65c0?, 0x2675501?, 0x0?}) /usr/local/go/src/net/http/client.go:251 net/http.(*Client).send(0xc003234090, 0xc001cf6500, {0x0?, 0x8?, 0x0?}) /usr/local/go/src/net/http/client.go:175 net/http.(*Client).do(0xc003234090, 0xc001cf6500) /usr/local/go/src/net/http/client.go:715 net/http.(*Client).Do(...) 
/usr/local/go/src/net/http/client.go:581 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).request(0xc0059d8a20, {0x7ff51c580d20, 0xc0023ac880}, 0x0?) vendor/k8s.io/client-go/rest/request.go:981 k8s.io/kubernetes/vendor/k8s.io/client-go/rest.(*Request).Do(0xc0059d8a20, {0x7ff51c580d20, 0xc0023ac880}) vendor/k8s.io/client-go/rest/request.go:1022 k8s.io/kubernetes/vendor/k8s.io/client-go/kubernetes/typed/core/v1.(*nodes).Get(0xc000c09fa0, {0x7ff51c580d20, 0xc0023ac880}, {0xc004abc300, 0x1f}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}}) vendor/k8s.io/client-go/kubernetes/typed/core/v1/node.go:77 k8s.io/kubernetes/test/e2e/framework/node.WaitConditionToBe({0x7ff51c580d20, 0xc0023ac880}, {0x8147108, 0xc0033bb520}, {0xc004abc300, 0x1f}, {0x76bb977, 0x5}, 0x1, 0x45d964b800) test/e2e/framework/node/wait.go:120 k8s.io/kubernetes/test/e2e/framework/node.WaitForNodeToBeReady(...) test/e2e/framework/node/wait.go:143 > k8s.io/kubernetes/test/e2e/cloud/gcp.rebootNode({0x7ff51c580d20, 0xc0023ac880}, {0x8147108, 0xc0033bb520}, {0x7ffd764d2600, 0x3}, {0xc004abc300, 0x1f}, {0x78b37be, 0x7d}) test/e2e/cloud/gcp/reboot.go:301 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot.func2(0x1) test/e2e/cloud/gcp/reboot.go:173 > k8s.io/kubernetes/test/e2e/cloud/gcp.testReboot test/e2e/cloud/gcp/reboot.go:169 Jan 29 20:25:00.626: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:25:02.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:25:04.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:25:06.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:25:08.804: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:25:10.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:25:12.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:25:14.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:25:16.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:25:19.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. 
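The 11m28.298s snapshot above is the only one with a different goroutine state: instead of sleeping between polls, goroutine 6055 is blocked inside an HTTP/2 RoundTrip on the nodes.Get call itself (wait.go:120), which is what produced the "Couldn't get node" run from roughly 20:24:08 to 20:24:51 before the taint polls resume at 20:25:00. The request is context-aware (rest.Request.Do carries the ctx visible in the frames), so one way to keep a wedged connection from stalling an attempt is a per-attempt deadline; a sketch with an assumed 10s budget, not the framework's:

package e2esketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getNodeBounded gives one poll attempt its own deadline so a hung HTTP/2
// round trip (as in the stack above) surfaces as a "Couldn't get node" error
// after ~10s instead of blocking the poll goroutine indefinitely.
func getNodeBounded(ctx context.Context, c kubernetes.Interface, name string) (*v1.Node, error) {
	attemptCtx, cancel := context.WithTimeout(ctx, 10*time.Second) // illustrative budget
	defer cancel()
	return c.CoreV1().Nodes().Get(attemptCtx, name, metav1.GetOptions{})
}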
Failure Automatically polling progress: [same spec] (Spec Runtime: 11m48.301s) In [It] (Node Runtime: 10m20.031s) [stacks unchanged from the 8m8.278s snapshot] [taint poll repeated every ~2s from 20:25:21.191 through 20:25:39.612] Automatically polling progress: [same spec] (Spec Runtime: 12m8.303s) In [It] (Node Runtime: 10m40.033s) [stacks unchanged] Jan 29 20:25:41.657: INFO: Condition Ready of node bootstrap-e2e-minion-group-qdgj is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 20:20:45 +0000 UTC}]. Failure Jan 29 20:25:43.657: INFO: Node bootstrap-e2e-minion-group-qdgj didn't reach desired Ready condition status (true) within 5m0s Jan 29 20:25:43.658: INFO: Node bootstrap-e2e-minion-group-qdgj failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 20:25:43.658 < Exit [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/29/23 20:25:43.658 (10m43.557s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 20:25:43.658 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 20:25:43.658 Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-7226v to bootstrap-e2e-minion-group-tq0k Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.041440764s (1.04151063s including waiting) Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-7226v_kube-system(50c9e097-5b0f-4df8-906b-d031ff7e5d85) Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": dial tcp 10.64.2.6:8181: connect: connection refused Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-7226v Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: 
INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-7226v Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-7226v_kube-system(50c9e097-5b0f-4df8-906b-d031ff7e5d85) Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-7226v: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Readiness probe failed: Get "http://10.64.2.16:8181/ready": dial tcp 10.64.2.16:8181: connect: connection refused Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-dfbff to bootstrap-e2e-minion-group-qdgj Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.662889684s (2.662899979s including waiting) Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.5:8181/ready": dial tcp 10.64.3.5:8181: connect: connection refused Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
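The coredns readiness events above alternate between "connection refused" (nothing listening on 8181, e.g. just after the container was killed) and a plain 503 (the server is up but its /ready plugin reports not ready). Kubelet's httpGet probe is essentially a short GET where only a 2xx/3xx status passes; a minimal stand-in, not kubelet's actual implementation:

package e2esketch

import (
	"fmt"
	"net/http"
	"time"
)

// probeReady mimics what kubelet's httpGet readiness probe does against
// CoreDNS: GET http://<podIP>:8181/ready; 2xx/3xx counts as ready, anything
// else (including dial errors like "connection refused") does not.
func probeReady(podIP string) error {
	client := &http.Client{Timeout: time.Second} // kubelet's default timeoutSeconds is 1
	resp, err := client.Get(fmt.Sprintf("http://%s:8181/ready", podIP))
	if err != nil {
		return err // e.g. dial tcp ...:8181: connect: connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}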
Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-dfbff Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.15:8181/ready": dial tcp 10.64.3.15:8181: connect: connection refused Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-dfbff_kube-system(56a8d266-9fa7-4aaf-b9dd-ddc06dee7b8e) Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "http://10.64.3.22:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-dfbff Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
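The recurring DNSConfigForming warning is kubelet enforcing its resolv.conf cap of three nameservers (the historical glibc limit): given more upstreams than that, it applies the first three (here 1.1.1.1 8.8.8.8 1.0.0.1) and emits this event about the rest. Roughly:

package e2esketch

// Kubelet caps resolv.conf at 3 nameservers and warns via a DNSConfigForming
// event ("Nameserver limits were exceeded ...") when it has to drop some.
const maxNameservers = 3

func applyNameserverLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}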
Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container coredns Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-dfbff_kube-system(56a8d266-9fa7-4aaf-b9dd-ddc06dee7b8e) Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f-dfbff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-dfbff Jan 29 20:25:43.757: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-7226v Jan 29 20:25:43.757: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 20:25:43.757: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 20:25:43.757: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 20:25:43.757: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 20:25:43.757: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 20:25:43.757: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 29 20:25:43.757: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
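The replicaset's FailedCreate above is a quota error, not a scheduling one: kube-system carries a ResourceQuota scoped to the system-critical priority classes, and pod creation momentarily exceeded it. An illustrative object of that shape in client-go types (the name and pod limit are assumptions, not the cluster's actual quota):

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// criticalPodQuota matches the scopes named in the FailedCreate event: it
// constrains only pods whose priorityClassName is one of the two
// system-critical classes.
var criticalPodQuota = &corev1.ResourceQuota{
	ObjectMeta: metav1.ObjectMeta{Name: "system-critical-pods", Namespace: "kube-system"}, // name assumed
	Spec: corev1.ResourceQuotaSpec{
		Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("10")}, // limit assumed
		ScopeSelector: &corev1.ScopeSelector{
			MatchExpressions: []corev1.ScopedResourceSelectorRequirement{{
				ScopeName: corev1.ResourceQuotaScopePriorityClass,
				Operator:  corev1.ScopeSelectorOpIn,
				Values:    []string{"system-node-critical", "system-cluster-critical"},
			}},
		},
	},
}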
Jan 29 20:25:43.757: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 20:25:43.757: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 20:25:43.757: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 20:25:43.757: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 20:25:43.757: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 20:25:43.757: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.757: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 20:25:43.757: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 20:25:43.757: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_90a62 became leader Jan 29 20:25:43.757: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_56f11 became leader Jan 29 20:25:43.757: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8ba66 became leader Jan 29 20:25:43.757: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ab7d9 became leader Jan 29 20:25:43.757: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_fbf7 became leader Jan 29 20:25:43.757: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_c6e94 became leader Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-4nk68 to bootstrap-e2e-minion-group-tq0k Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 668.677173ms (668.692909ms including waiting) Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and 
re-created. Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8093/healthz": dial tcp 10.64.2.5:8093: connect: network is unreachable Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-4nk68_kube-system(9618808b-f13f-4c68-85f0-0604438645d3) Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Unhealthy: Liveness probe failed: Get "http://10.64.2.9:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
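(Triage note: the konnectivity-agent failures just above, "network is unreachable" and "context deadline exceeded" against http://10.64.2.x:8093/healthz, are ordinary HTTP liveness probe failures while the test drops outbound packets. A minimal sketch of such a probe follows; the path and port come from the events, but the timing values are assumptions, since the real manifest is not shown in this log.)

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessProbe sketches an HTTP probe like the one implied by the events
// above: GET /healthz on port 8093. Timing values are illustrative.
func livenessProbe() *corev1.Probe {
	return &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(8093),
			},
		},
		TimeoutSeconds:   15, // "context deadline exceeded" means this budget elapsed
		PeriodSeconds:    10,
		FailureThreshold: 3, // after this many misses the kubelet restarts the container
	}
}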
Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-4nk68: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-4nk68_kube-system(9618808b-f13f-4c68-85f0-0604438645d3) Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-cd6h5 to bootstrap-e2e-minion-group-9w8s Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 663.587649ms (663.598454ms including waiting) Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
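(Triage note: the recurring "Back-off restarting failed container" lines record the kubelet's crash-loop backoff, which per the documented pod lifecycle roughly doubles the restart delay from a 10s base up to a 5m cap, resetting after a stretch of stable running. A toy model of that schedule; the constants mirror the documented behavior, not kubelet source.)

package sketch

import "time"

// crashLoopDelay models the kubelet restart backoff: 10s base, doubling per
// consecutive crash, capped at five minutes. Purely illustrative.
func crashLoopDelay(consecutiveCrashes int) time.Duration {
	delay := 10 * time.Second
	for i := 1; i < consecutiveCrashes; i++ {
		delay *= 2
		if delay >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return delay
}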
Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-cd6h5_kube-system(7ee8917e-685a-4438-ae1f-31d3475142e7) Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-cd6h5: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-cd6h5_kube-system(7ee8917e-685a-4438-ae1f-31d3475142e7) Jan 29 20:25:43.757: INFO: event for konnectivity-agent-wh8g5: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-wh8g5 to bootstrap-e2e-minion-group-qdgj Jan 29 20:25:43.757: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 20:25:43.757: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.480831038s (1.480840227s including waiting) Jan 29 20:25:43.757: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Started: 
Started container konnectivity-agent Jan 29 20:25:43.757: INFO: event for konnectivity-agent-wh8g5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.757: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.757: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 20:25:43.758: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container konnectivity-agent Jan 29 20:25:43.758: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container konnectivity-agent Jan 29 20:25:43.758: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container konnectivity-agent Jan 29 20:25:43.758: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-wh8g5_kube-system(7a8f5ba8-53f9-4149-b38f-7c10aa331632) Jan 29 20:25:43.758: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Liveness probe failed: Get "http://10.64.3.20:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:25:43.758: INFO: event for konnectivity-agent-wh8g5: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 20:25:43.758: INFO: event for konnectivity-agent-wh8g5: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.758: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-wh8g5 Jan 29 20:25:43.758: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-cd6h5 Jan 29 20:25:43.758: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-4nk68 Jan 29 20:25:43.758: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 20:25:43.758: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 20:25:43.758: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 20:25:43.758: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:25:43.758: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 20:25:43.758: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 29 20:25:43.758: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 20:25:43.758: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 20:25:43.758: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 20:25:43.758: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.758: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 20:25:43.758: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 20:25:43.758: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 20:25:43.758: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 20:25:43.758: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 20:25:43.758: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 20:25:43.758: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:25:43.758: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 20:25:43.758: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 20:25:43.758: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 20:25:43.758: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 20:25:43.758: INFO: event for 
kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.758: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_5cb0b339-27fa-478a-a12b-f3e084d9ff7a became leader Jan 29 20:25:43.758: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_896d20be-ed11-4ad6-ba6f-aeff112d6cdf became leader Jan 29 20:25:43.758: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_63955417-795f-4dea-b69f-0c2330df6065 became leader Jan 29 20:25:43.758: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ae362da5-d8bb-4dee-8d15-074d6258d290 became leader Jan 29 20:25:43.758: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c3dc2798-e744-4921-97b3-0c8487d60e65 became leader Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/network-unavailable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-msh27 to bootstrap-e2e-minion-group-qdgj Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.772576572s (2.77258737s including waiting) Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
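(Triage note: the kube-apiserver probe failures further up, "Get https://127.0.0.1:443/readyz: connect: connection refused" and the livez check with etcd/kms excludes, can be replayed once the cluster is reachable again. One hedged way to do that from Go, using client-go's discovery REST client; the endpoints are the standard apiserver health paths, the error handling is schematic.)

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Hit the same health endpoints the probes above were failing against.
	for _, path := range []string{"/readyz", "/livez"} {
		body, err := client.Discovery().RESTClient().Get().AbsPath(path).DoRaw(context.TODO())
		if err != nil {
			fmt.Printf("%s: %v\n", path, err)
			continue
		}
		fmt.Printf("%s: %s\n", path, body)
	}
}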
Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-msh27 Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container autoscaler Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-msh27_kube-system(c36a8737-0bbd-47ac-8331-9bb067fda14a) Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-msh27 Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container autoscaler Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container autoscaler Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container autoscaler Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985-msh27: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-msh27_kube-system(c36a8737-0bbd-47ac-8331-9bb067fda14a) Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-msh27 Jan 29 20:25:43.758: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9w8s_kube-system(41c8500189f52bcbb0d902b75d8c693f) Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
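(Triage note: the root failure of this run was a two-minute timeout waiting for the "default" ServiceAccount, and the kube-dns-autoscaler ReplicaSet above hit the same shape of problem, "serviceaccount \"kube-dns-autoscaler\" not found". A schematic poll for a ServiceAccount, roughly what the framework's BeforeEach does; the helper name and intervals are mine, not the framework's.)

package sketch

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForServiceAccount polls until the named ServiceAccount exists, returning
// the familiar "timed out waiting for the condition" error on timeout.
func waitForServiceAccount(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		_, err := c.CoreV1().ServiceAccounts(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if errors.IsNotFound(err) {
			return false, nil // not created yet; keep polling
		}
		return err == nil, err
	})
}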
Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9w8s_kube-system(41c8500189f52bcbb0d902b75d8c693f) Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-9w8s: {kubelet bootstrap-e2e-minion-group-9w8s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-9w8s_kube-system(41c8500189f52bcbb0d902b75d8c693f) Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
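(Triage note: the DNSConfigForming warnings, "Nameserver limits were exceeded ... 1.1.1.1 8.8.8.8 1.0.0.1", come from the classic glibc resolver limit: only three nameservers are honored, so the kubelet trims the node's list and warns. A trivial model of that trimming; the constant and function names are mine, not kubelet identifiers.)

package sketch

// maxResolvConfNameservers mirrors the glibc limit of three that the kubelet
// enforces when forming a pod's resolv.conf.
const maxResolvConfNameservers = 3

// trimNameservers keeps the first three entries, which is why the applied
// line in the events shows exactly 1.1.1.1, 8.8.8.8 and 1.0.0.1.
func trimNameservers(servers []string) []string {
	if len(servers) <= maxResolvConfNameservers {
		return servers
	}
	return servers[:maxResolvConfNameservers]
}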
Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773) Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773) Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qdgj: {kubelet bootstrap-e2e-minion-group-qdgj} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qdgj_kube-system(d91ce8a7f13c5fdfeaaa986d0982d773) Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f) Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f) Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} Killing: Stopping container kube-proxy Jan 29 20:25:43.758: INFO: event for kube-proxy-bootstrap-e2e-minion-group-tq0k: {kubelet bootstrap-e2e-minion-group-tq0k} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-tq0k_kube-system(0bf55a39319402a64119797ff480665f) Jan 29 20:25:43.758: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 20:25:43.758: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 20:25:43.758: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 20:25:43.758: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 20:25:43.758: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be 
killed and re-created. Jan 29 20:25:43.758: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 20:25:43.758: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_908ace71-8fd9-4871-8936-5aab7c5cfed3 became leader Jan 29 20:25:43.758: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_978a61b3-7079-42ae-9f59-cf7b479348e3 became leader Jan 29 20:25:43.758: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_34f033d2-db64-4ab7-af5f-d35e0c069db5 became leader Jan 29 20:25:43.758: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_33ec300a-9e85-4dd8-be41-6a41765bbb91 became leader Jan 29 20:25:43.758: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ec577213-5b4f-40af-9d4a-7d6d74c43090 became leader Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-wxpff to bootstrap-e2e-minion-group-qdgj Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.193873035s (1.193895245s including waiting) Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
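(Triage note: each "became leader" line for kube-scheduler, and for ingress-gce-lock and kube-controller-manager earlier, marks a fresh acquisition of that component's leader-election lock, which is expected when the master reboots repeatedly. For orientation, the standard client-go pattern looks roughly like this; the lock name, identity and timings are placeholders, not the components' real configuration.)

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "demo-lock", Namespace: "kube-system"}, // placeholder lock
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "demo-holder"}, // placeholder identity
	}

	leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {}, // do work while leader
			OnStoppedLeading: func() {},                    // lost the lease, e.g. after a reboot
		},
	})
}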
Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-wxpff Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Liveness probe failed: Get "http://10.64.3.14:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-wxpff Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container default-http-backend Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99-wxpff: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container default-http-backend Jan 29 20:25:43.758: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-wxpff Jan 29 20:25:43.758: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 20:25:43.758: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 20:25:43.758: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 20:25:43.758: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 20:25:43.758: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 20:25:43.758: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 20:25:43.758: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5nlck to bootstrap-e2e-minion-group-9w8s Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 793.012358ms (793.412637ms including waiting) Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.021376731s (2.021411909s including waiting) Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-5nlck: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-g8pvk: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-g8pvk to bootstrap-e2e-master Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 834.33593ms (834.358292ms including waiting) Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 20:25:43.758: INFO: event for 
metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.93646676s (1.936479152s including waiting) Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-g8pvk: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ggkjj to bootstrap-e2e-minion-group-tq0k Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 815.429367ms (815.447239ms including waiting) Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.922350868s (1.922366582s including waiting) Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Created: Created container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-ggkjj: {kubelet bootstrap-e2e-minion-group-tq0k} Started: Started container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-jcl2g to bootstrap-e2e-minion-group-qdgj Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 771.095227ms (771.126697ms including waiting) Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container 
metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.922287469s (1.92232627s including waiting) Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {node-controller } NodeNotReady: Node is not ready Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metadata-proxy Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1-jcl2g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container prometheus-to-sd-exporter Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5nlck Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-g8pvk Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ggkjj Jan 29 20:25:43.758: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-jcl2g Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
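(Triage note: the FailedScheduling messages above, "untolerated taint {node.kubernetes.io/not-ready: }", explain why DaemonSet pods such as metadata-proxy ride through the reboots while plain Deployment pods like metrics-server wait: the DaemonSet controller adds NoExecute tolerations for the not-ready and unreachable taints by default. Expressed directly, with illustrative naming.)

package sketch

import corev1 "k8s.io/api/core/v1"

// notReadyToleration is the kind of toleration that lets a pod stay on, or
// schedule to, a node tainted node.kubernetes.io/not-ready, as discussed above.
var notReadyToleration = corev1.Toleration{
	Key:      "node.kubernetes.io/not-ready",
	Operator: corev1.TolerationOpExists,
	Effect:   corev1.TaintEffectNoExecute,
}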
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-4pd7g to bootstrap-e2e-minion-group-qdgj Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.362079376s (3.362094624s including waiting) Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metrics-server Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metrics-server Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.97442928s (2.974455307s including waiting) Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Created: Created container metrics-server-nanny Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Started: Started container metrics-server-nanny Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container metrics-server Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Killing: Stopping container metrics-server-nanny Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Readiness probe failed: Get "https://10.64.3.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c-4pd7g: {kubelet bootstrap-e2e-minion-group-qdgj} Unhealthy: Liveness probe failed: Get "https://10.64.3.9:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-4pd7g
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-4pd7g
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-t82lt to bootstrap-e2e-minion-group-9w8s
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.394960598s (1.395000082s including waiting)
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.053276162s (1.053291079s including waiting)
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Created: Created container metrics-server-nanny
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Started: Started container metrics-server-nanny
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": dial tcp 10.64.0.3:10250: connect: connection refused
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": dial tcp 10.64.0.3:10250: connect: connection refused
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Liveness probe failed: Get "https://10.64.0.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Killing: Stopping container metrics-server-nanny
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Unhealthy: Readiness probe failed: Get "https://10.64.0.4:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {node-controller } NodeNotReady: Node is not ready
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 20:25:43.758: INFO: event for metrics-server-v0.5.2-867b8754b9-t82lt: {kubelet bootstrap-e2e-minion-group-9w8s} Creat