go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:00:30.499
(from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 01:58:10.983
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 01:58:10.983 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 01:58:10.983
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 01:58:10.983
Jan 29 01:58:10.983: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 01:58:10.984
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 01:58:11.167
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 01:58:11.247
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 01:58:11.327 (344ms)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 01:58:11.327
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 01:58:11.327 (0s)
> Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 01:58:11.327
Jan 29 01:58:11.423: INFO: Getting bootstrap-e2e-minion-group-s51h
Jan 29 01:58:11.465: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-s51h condition Ready to be true
Jan 29 01:58:11.474: INFO: Getting bootstrap-e2e-minion-group-6w15
Jan 29 01:58:11.474: INFO: Getting bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:11.507: INFO: Node bootstrap-e2e-minion-group-s51h has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h]
Jan 29 01:58:11.507: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h]
Jan 29 01:58:11.507: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bff8h" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.507: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s51h" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.516: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7c3d condition Ready to be true
Jan 29 01:58:11.517: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6w15 condition Ready to be true
Jan 29 01:58:11.550: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h": Phase="Running", Reason="", readiness=true. Elapsed: 42.908961ms
Jan 29 01:58:11.550: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.550: INFO: Pod "metadata-proxy-v0.1-bff8h": Phase="Running", Reason="", readiness=true. Elapsed: 43.221172ms
Jan 29 01:58:11.550: INFO: Pod "metadata-proxy-v0.1-bff8h" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.550: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h]
Jan 29 01:58:11.550: INFO: Getting external IP address for bootstrap-e2e-minion-group-s51h
Jan 29 01:58:11.550: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-s51h(34.145.127.28:22)
Jan 29 01:58:11.559: INFO: Node bootstrap-e2e-minion-group-6w15 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0]
Jan 29 01:58:11.559: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0]
Jan 29 01:58:11.559: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.559: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-fths2" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.559: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6w15" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.560: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bv2w9" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.560: INFO: Node bootstrap-e2e-minion-group-7c3d has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm]
Jan 29 01:58:11.560: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm]
Jan 29 01:58:11.560: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-pn2qm" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.560: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7c3d" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.605: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.528708ms
Jan 29 01:58:11.605: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.606: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2": Phase="Running", Reason="", readiness=true. Elapsed: 46.099408ms
Jan 29 01:58:11.606: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.606: INFO: Pod "metadata-proxy-v0.1-bv2w9": Phase="Running", Reason="", readiness=true. Elapsed: 46.138258ms
Jan 29 01:58:11.606: INFO: Pod "metadata-proxy-v0.1-bv2w9" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.606: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15": Phase="Running", Reason="", readiness=true. Elapsed: 46.273849ms
Jan 29 01:58:11.606: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.606: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0]
Jan 29 01:58:11.606: INFO: Getting external IP address for bootstrap-e2e-minion-group-6w15
Jan 29 01:58:11.606: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-6w15(35.233.188.19:22)
Jan 29 01:58:11.606: INFO: Pod "metadata-proxy-v0.1-pn2qm": Phase="Running", Reason="", readiness=true. Elapsed: 46.753713ms
Jan 29 01:58:11.606: INFO: Pod "metadata-proxy-v0.1-pn2qm" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.607: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=true. Elapsed: 46.877658ms
Jan 29 01:58:11.607: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.607: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm]
Jan 29 01:58:11.607: INFO: Getting external IP address for bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:11.607: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-7c3d(35.247.28.1:22)
Jan 29 01:58:12.070: INFO: ssh prow@34.145.127.28:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 01:58:12.070: INFO: ssh prow@34.145.127.28:22: stdout: ""
Jan 29 01:58:12.070: INFO: ssh prow@34.145.127.28:22: stderr: ""
Jan 29 01:58:12.070: INFO: ssh prow@34.145.127.28:22: exit code: 0
Jan 29 01:58:12.070: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-s51h condition Ready to be false
Jan 29 01:58:12.112: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled
Jan 29 01:58:12.125: INFO: ssh prow@35.247.28.1:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 01:58:12.125: INFO: ssh prow@35.247.28.1:22: stdout: ""
Jan 29 01:58:12.125: INFO: ssh prow@35.247.28.1:22: stderr: ""
Jan 29 01:58:12.125: INFO: ssh prow@35.247.28.1:22: exit code: 0
Jan 29 01:58:12.125: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-7c3d condition Ready to be false
Jan 29 01:58:12.125: INFO: ssh prow@35.233.188.19:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 01:58:12.125: INFO: ssh prow@35.233.188.19:22: stdout: ""
Jan 29 01:58:12.125: INFO: ssh prow@35.233.188.19:22: stderr: ""
Jan 29 01:58:12.125: INFO: ssh prow@35.233.188.19:22: exit code: 0
Jan 29 01:58:12.125: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6w15 condition Ready to be false
Jan 29 01:58:12.169: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:12.169: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:14.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:14.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:14.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:16.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:16.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:16.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:18.241: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:18.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:18.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:20.284: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:20.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:20.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:22.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:22.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:22.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:24.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:24.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:24.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:26.414: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:26.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:26.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:28.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:28.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:28.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:30.499: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:30.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:30.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:32.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:32.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:32.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:34.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:34.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:34.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:36.627: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:36.695: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:36.695: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:38.667: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:38.734: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:38.735: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:40.708: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:40.775: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:40.775: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:42.748: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:42.814: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:42.814: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:44.787: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:44.854: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:44.854: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:46.827: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:46.894: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:46.894: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:48.868: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:48.934: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:48.934: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:50.907: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:50.974: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:50.974: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:52.947: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:53.014: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:53.014: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:54.987: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:55.054: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:55.054: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:57.026: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:57.094: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:57.094: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:59.066: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:59.134: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:59.134: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:59:01.106: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:59:01.173: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:59:01.173: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:59:03.145: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:59:03.213: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:59:03.213: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:59:05.185: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:59:05.253: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:59:05.253: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:59:13.985: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:13.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:14.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:16.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:16.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:16.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:18.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:18.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:18.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:20.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:20.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:20.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:22.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:22.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:22.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:24.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:24.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:24.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:26.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:26.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:26.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:28.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:28.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:28.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:30.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:30.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:30.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:32.539: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:32.539: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:32.539: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:34.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:34.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:34.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:36.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:36.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:36.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:38.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:38.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:38.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:40.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:40.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:40.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:42.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:42.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:42.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:44.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:44.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:44.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:46.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:46.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:46.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:48.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:48.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:48.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:50.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:50.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:50.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:53.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:53.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:53.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:55.068: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:55.068: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:55.068: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:57.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:57.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:57.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:59.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:59.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:59.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:01.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:01.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:01.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:03.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:03.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:03.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:05.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:05.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:05.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:07.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:07.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:07.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled Jan 29 02:00:09.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:00:09.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:00:09.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:00:11.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:00:11.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:00:11.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-7c3d didn't reach desired Ready condition status (false) within 2m0s Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-s51h didn't reach desired Ready condition status (false) within 2m0s Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-6w15 didn't reach desired Ready condition status (false) within 2m0s Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-6w15 failed reboot test. Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-7c3d failed reboot test. Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-s51h failed reboot test. 
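The 2m0s wait that fails above polls each node's Ready condition roughly every two seconds, expecting it to flip to false while inbound packets are dropped. The loop can be approximated with kubectl as below; this is a hypothetical sketch (the real test uses the Go client inside the e2e framework, and the helper name, poll interval, and use of kubectl are assumptions, not the framework's code):

```shell
# Sketch: poll a node's Ready condition until it reports the desired value,
# giving up after 2 minutes, mirroring the "didn't reach desired Ready
# condition status (false) within 2m0s" messages above.
wait_for_ready_status() {
  local node="$1" want="$2" deadline=$((SECONDS + 120))
  while (( SECONDS < deadline )); do
    local got
    got="$(kubectl get node "$node" \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')"
    if [ "$got" = "$want" ]; then
      return 0
    fi
    sleep 2   # roughly matches the ~2s cadence of the log entries above
  done
  echo "Node $node didn't reach desired Ready condition status ($want) within 2m0s"
  return 1
}
```

Here the test wanted Ready to become "False" during the packet-drop window, and all three nodes kept reporting "True", which is what fails the run.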
Jan 29 02:00:13.455: INFO: Executing termination hook on nodes
Jan 29 02:00:13.455: INFO: Getting external IP address for bootstrap-e2e-minion-group-6w15
Jan 29 02:00:13.455: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-6w15(35.233.188.19:22)
Jan 29 02:00:29.456: INFO: ssh prow@35.233.188.19:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 02:00:29.456: INFO: ssh prow@35.233.188.19:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 01:58:22 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 02:00:29.456: INFO: ssh prow@35.233.188.19:22: stderr: ""
Jan 29 02:00:29.456: INFO: ssh prow@35.233.188.19:22: exit code: 0
Jan 29 02:00:29.456: INFO: Getting external IP address for bootstrap-e2e-minion-group-7c3d
Jan 29 02:00:29.456: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-7c3d(35.247.28.1:22)
Jan 29 02:00:29.974: INFO: ssh prow@35.247.28.1:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 02:00:29.974: INFO: ssh prow@35.247.28.1:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 01:58:22 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 02:00:29.974: INFO: ssh prow@35.247.28.1:22: stderr: ""
Jan 29 02:00:29.974: INFO: ssh prow@35.247.28.1:22: exit code: 0
Jan 29 02:00:29.974: INFO: Getting external IP address for bootstrap-e2e-minion-group-s51h
Jan 29 02:00:29.974: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-s51h(34.145.127.28:22)
Jan 29 02:00:30.498: INFO: ssh prow@34.145.127.28:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 02:00:30.498: INFO: ssh prow@34.145.127.28:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 01:58:22 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 02:00:30.498: INFO: ssh prow@34.145.127.28:22: stderr: ""
Jan 29 02:00:30.498: INFO: ssh prow@34.145.127.28:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:00:30.499
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 02:00:30.499 (2m19.171s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:00:30.499
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 02:00:30.499
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-2nvv4 to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.229909205s (4.229917066s including waiting)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-2nvv4_kube-system(c5a7c76e-33f7-4271-a7f7-8f4b6013857d)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-sch2n to bootstrap-e2e-minion-group-s51h
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 968.405842ms (968.417139ms including waiting)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-2nvv4
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-sch2n
Jan 29 02:00:30.548: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 02:00:30.548: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 02:00:30.548: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d580 became leader
Jan 29 02:00:30.548: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b84f3 became leader
Jan 29 02:00:30.548: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1f6a8 became leader
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-krs9s to bootstrap-e2e-minion-group-s51h
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 589.41049ms (589.437215ms including waiting)
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-rw7fw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rw7fw to bootstrap-e2e-minion-group-7c3d
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 627.397814ms (627.417417ms including waiting)
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-x4gbp: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-x4gbp to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.54378487s (2.543795192s including waiting)
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent
Jan 29 02:00:30.549: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent
Jan 29 02:00:30.549: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:00:30.549: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168)
Jan 29 02:00:30.549: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.549: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rw7fw
Jan 29 02:00:30.549: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-x4gbp
Jan 29 02:00:30.549: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-krs9s
Jan 29 02:00:30.549: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 02:00:30.549: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:00:30.549: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:00:30.549: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 02:00:30.549: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 02:00:30.549: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 02:00:30.549: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8c88f9f3-0fcf-4820-9f5f-5ee5c968f50d became leader
Jan 29 02:00:30.549: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e5ddf3f0-26c9-4d3b-ba00-8f32b5849ba5 became leader
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fths2 to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 4.452281102s (4.452289884s including waiting)
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe)
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fths2
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38)
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-s51h_kube-system(2451b12f9e04e1f8e16fde66c2622fcd)
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 02:00:30.549: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a9b313b0-f9fa-43de-b979-0958c05e1287 became leader
Jan 29 02:00:30.549: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ecac3899-f709-4f43-824f-37faa839889c became leader
Jan 29 02:00:30.549: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_460317a8-6d35-4656-87b9-0d8d3533477a became leader
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-9bf57 to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 573.484189ms (573.492084ms including waiting)
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-9bf57
Jan 29 02:00:30.549: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bff8h to bootstrap-e2e-minion-group-s51h
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 737.160338ms (737.179651ms including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.876782326s (1.876796204s including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bv2w9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.977248ms (680.991364ms including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.818844362s (1.818852935s including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for 
metadata-proxy-v0.1-pn2qm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pn2qm to bootstrap-e2e-minion-group-7c3d Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 679.514836ms (679.523319ms including waiting) Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.788401445s (1.788433466s including waiting) Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qnhsn to bootstrap-e2e-master Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet 
bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 888.975253ms (888.981818ms including waiting) Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.94842067s (1.948435203s including waiting) Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qnhsn Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pn2qm Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bff8h Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bv2w9 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. 
preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-tj5j9 to bootstrap-e2e-minion-group-6w15 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.279253505s (2.279262122s including waiting) Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.794216432s (3.794249509s including waiting) Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server-nanny Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server-nanny Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness 
probe failed: HTTP probe failed with statuscode: 500 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server-nanny Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-tj5j9 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-tj5j9 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-kkpk2 to bootstrap-e2e-minion-group-7c3d Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.249697964s (1.249709924s including 
waiting) Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 964.990126ms (965.003136ms including waiting) Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet 
bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": context deadline exceeded Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-kkpk2 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.228985526s (2.228994351s including waiting)
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container volume-snapshot-controller
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container volume-snapshot-controller
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container volume-snapshot-controller
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f15bbfbe-0efc-4a1b-ab62-e07fa18067f5)
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:00:30.549 (50ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:00:30.549
Jan 29 02:00:30.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:00:30.597 (48ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 02:00:30.597
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 02:00:30.597 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:00:30.597
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:00:30.597
STEP: Collecting events from namespace "reboot-8100". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 02:00:30.597
STEP: Found 0 events.
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 02:00:30.638 Jan 29 02:00:30.679: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 02:00:30.679: INFO: Jan 29 02:00:30.726: INFO: Logging node info for node bootstrap-e2e-master Jan 29 02:00:30.777: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 09b38bdb-4830-432f-941a-7f47d2e4cb82 760 0 2023-01-29 01:56:15 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 01:56:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 01:57:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 01:57:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.48.38,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817944af0c35e596144cbe0c39ece004,SystemUUID:817944af-0c35-e596-144c-be0c39ece004,BootID:10741312-523c-4032-96d6-5f4f987f3139,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from 
Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:00:30.778: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 02:00:30.820: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 02:00:30.882: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 29 02:00:30.882: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 01:55:48 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container l7-lb-controller ready: true, restart count 4 Jan 29 02:00:30.882: INFO: metadata-proxy-v0.1-qnhsn started at 2023-01-29 01:56:48 +0000 UTC (0+2 container statuses recorded) Jan 29 02:00:30.882: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 02:00:30.882: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 02:00:30.882: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 01:55:30 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container etcd-container ready: true, restart count 0 Jan 29 02:00:30.882: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container etcd-container ready: true, restart count 0 Jan 29 02:00:30.882: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container konnectivity-server-container ready: true, restart count 0 Jan 29 02:00:30.882: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container kube-apiserver ready: true, restart count 1 Jan 29 02:00:30.882: INFO: 
kube-scheduler-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container kube-scheduler ready: true, restart count 2 Jan 29 02:00:30.882: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 01:55:48 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container kube-addon-manager ready: true, restart count 0 Jan 29 02:00:31.108: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 02:00:31.108: INFO: Logging node info for node bootstrap-e2e-minion-group-6w15 Jan 29 02:00:31.150: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6w15 1fb28d13-4bf7-48f6-87ef-e22ff445a0fa 686 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6w15 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } 
{kube-controller-manager Update v1 2023-01-29 01:56:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 01:56:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 01:56:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-6w15,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is 
functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:51 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:51 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:51 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 01:56:51 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.188.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4953e80002e138ed6b9c633aa1bea962,SystemUUID:4953e800-02e1-38ed-6b9c-633aa1bea962,BootID:e1ee021c-d911-4c88-add0-6a97765e908a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:00:31.150: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6w15 Jan 29 02:00:31.194: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6w15 Jan 29 02:00:31.255: INFO: volume-snapshot-controller-0 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container volume-snapshot-controller ready: true, restart count 4 Jan 29 02:00:31.255: INFO: konnectivity-agent-x4gbp started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 29 02:00:31.255: INFO: kube-proxy-bootstrap-e2e-minion-group-6w15 started at 2023-01-29 01:56:20 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container kube-proxy 
ready: true, restart count 2 Jan 29 02:00:31.255: INFO: metadata-proxy-v0.1-bv2w9 started at 2023-01-29 01:56:21 +0000 UTC (0+2 container statuses recorded) Jan 29 02:00:31.255: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 02:00:31.255: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 02:00:31.255: INFO: kube-dns-autoscaler-5f6455f985-fths2 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container autoscaler ready: true, restart count 3 Jan 29 02:00:31.255: INFO: coredns-6846b5b5f-2nvv4 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container coredns ready: false, restart count 3 Jan 29 02:00:31.255: INFO: l7-default-backend-8549d69d99-9bf57 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container default-http-backend ready: true, restart count 1 Jan 29 02:00:31.426: INFO: Latency metrics for node bootstrap-e2e-minion-group-6w15 Jan 29 02:00:31.426: INFO: Logging node info for node bootstrap-e2e-minion-group-7c3d Jan 29 02:00:31.469: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7c3d 8e1fb573-c544-42e8-afb6-9489bf273e1f 798 0 2023-01-29 01:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7c3d kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-29 01:56:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 01:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-29 01:56:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} 
{kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 01:57:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-7c3d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:19 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:19 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:19 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 01:57:19 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.28.1,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e82fc84d3d165f0af5fb24e7309ec0f6,SystemUUID:e82fc84d-3d16-5f0a-f5fb-24e7309ec0f6,BootID:739a2898-bd99-4e3a-8596-b78c81d593c1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:00:31.469: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7c3d Jan 29 02:00:31.512: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7c3d Jan 29 02:00:31.570: INFO: metadata-proxy-v0.1-pn2qm started at 2023-01-29 01:56:19 +0000 UTC (0+2 container statuses recorded) Jan 29 02:00:31.570: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 02:00:31.570: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 02:00:31.570: INFO: konnectivity-agent-rw7fw started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.570: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 29 02:00:31.570: INFO: metrics-server-v0.5.2-867b8754b9-kkpk2 started at 2023-01-29 01:56:57 +0000 UTC (0+2 container statuses recorded) Jan 29 02:00:31.570: INFO: Container metrics-server ready: false, restart count 3 Jan 29 02:00:31.570: INFO: Container metrics-server-nanny ready: false, restart count 2 Jan 29 02:00:31.570: INFO: kube-proxy-bootstrap-e2e-minion-group-7c3d started at 2023-01-29 01:56:18 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.570: INFO: Container kube-proxy ready: true, restart count 2 Jan 29 02:00:31.739: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-7c3d Jan 29 02:00:31.739: INFO: Logging node info for node bootstrap-e2e-minion-group-s51h Jan 29 02:00:31.782: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-s51h 889261a3-c23b-4a70-8491-293cc30164ed 680 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-s51h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 01:56:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 01:56:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-s51h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 
UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:50 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:50 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:50 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 01:56:50 +0000 UTC,LastTransitionTime:2023-01-29 01:56:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.145.127.28,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e84ea8c5f84b48682cb3668f2d7a776c,SystemUUID:e84ea8c5-f84b-4868-2cb3-668f2d7a776c,BootID:0132a2c1-d402-4210-807d-5fbc99b4e14d,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:00:31.782: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-s51h Jan 29 02:00:31.825: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-s51h Jan 29 02:00:31.885: INFO: kube-proxy-bootstrap-e2e-minion-group-s51h started at 2023-01-29 01:56:20 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.885: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 02:00:31.885: INFO: metadata-proxy-v0.1-bff8h started at 2023-01-29 01:56:21 +0000 UTC (0+2 container statuses recorded) Jan 29 02:00:31.885: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 02:00:31.885: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 02:00:31.885: INFO: konnectivity-agent-krs9s started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.885: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 29 02:00:31.885: INFO: coredns-6846b5b5f-sch2n started at 2023-01-29 01:56:42 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.885: INFO: Container coredns ready: false, restart count 2 Jan 29 02:00:32.055: INFO: Latency metrics for node bootstrap-e2e-minion-group-s51h END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:00:32.055 (1.458s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:00:32.055 (1.458s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:00:32.055 STEP: Destroying namespace "reboot-8100" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 02:00:32.055 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:00:32.098 (43ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:00:32.098 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:00:32.098 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:00:30.499 (from junit_01.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 01:58:10.983 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 01:58:10.983 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 01:58:10.983 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 01:58:10.983 Jan 29 01:58:10.983: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 01:58:10.984 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 01:58:11.167 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 01:58:11.247 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 01:58:11.327 (344ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 01:58:11.327 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 01:58:11.327 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 01:58:11.327 Jan 29 01:58:11.423: INFO: Getting bootstrap-e2e-minion-group-s51h Jan 29 01:58:11.465: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-s51h condition Ready to be true Jan 29 01:58:11.474: INFO: Getting bootstrap-e2e-minion-group-6w15 Jan 29 01:58:11.474: INFO: Getting bootstrap-e2e-minion-group-7c3d Jan 29 01:58:11.507: INFO: Node bootstrap-e2e-minion-group-s51h has 2 assigned pods 
with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h]
Jan 29 01:58:11.507: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h]
Jan 29 01:58:11.507: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bff8h" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.507: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s51h" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.516: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7c3d condition Ready to be true
Jan 29 01:58:11.517: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6w15 condition Ready to be true
Jan 29 01:58:11.550: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h": Phase="Running", Reason="", readiness=true. Elapsed: 42.908961ms
Jan 29 01:58:11.550: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.550: INFO: Pod "metadata-proxy-v0.1-bff8h": Phase="Running", Reason="", readiness=true. Elapsed: 43.221172ms
Jan 29 01:58:11.550: INFO: Pod "metadata-proxy-v0.1-bff8h" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.550: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h]
Jan 29 01:58:11.550: INFO: Getting external IP address for bootstrap-e2e-minion-group-s51h
Jan 29 01:58:11.550: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-s51h(34.145.127.28:22)
Jan 29 01:58:11.559: INFO: Node bootstrap-e2e-minion-group-6w15 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0]
Jan 29 01:58:11.559: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0]
Jan 29 01:58:11.559: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.559: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-fths2" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.559: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6w15" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.560: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bv2w9" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.560: INFO: Node bootstrap-e2e-minion-group-7c3d has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm]
Jan 29 01:58:11.560: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm]
Jan 29 01:58:11.560: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-pn2qm" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.560: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7c3d" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 01:58:11.605: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.528708ms
Jan 29 01:58:11.605: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.606: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2": Phase="Running", Reason="", readiness=true. Elapsed: 46.099408ms
Jan 29 01:58:11.606: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.606: INFO: Pod "metadata-proxy-v0.1-bv2w9": Phase="Running", Reason="", readiness=true. Elapsed: 46.138258ms
Jan 29 01:58:11.606: INFO: Pod "metadata-proxy-v0.1-bv2w9" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.606: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15": Phase="Running", Reason="", readiness=true. Elapsed: 46.273849ms
Jan 29 01:58:11.606: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.606: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0]
Jan 29 01:58:11.606: INFO: Getting external IP address for bootstrap-e2e-minion-group-6w15
Jan 29 01:58:11.606: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-6w15(35.233.188.19:22)
Jan 29 01:58:11.606: INFO: Pod "metadata-proxy-v0.1-pn2qm": Phase="Running", Reason="", readiness=true. Elapsed: 46.753713ms
Jan 29 01:58:11.606: INFO: Pod "metadata-proxy-v0.1-pn2qm" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.607: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=true. Elapsed: 46.877658ms
Jan 29 01:58:11.607: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d" satisfied condition "running and ready, or succeeded"
Jan 29 01:58:11.607: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm]
Jan 29 01:58:11.607: INFO: Getting external IP address for bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:11.607: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-7c3d(35.247.28.1:22)
Jan 29 01:58:12.070: INFO: ssh prow@34.145.127.28:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 01:58:12.070: INFO: ssh prow@34.145.127.28:22: stdout: ""
Jan 29 01:58:12.070: INFO: ssh prow@34.145.127.28:22: stderr: ""
Jan 29 01:58:12.070: INFO: ssh prow@34.145.127.28:22: exit code: 0
Jan 29 01:58:12.070: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-s51h condition Ready to be false
Jan 29 01:58:12.112: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled
Jan 29 01:58:12.125: INFO: ssh prow@35.247.28.1:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 01:58:12.125: INFO: ssh prow@35.247.28.1:22: stdout: ""
Jan 29 01:58:12.125: INFO: ssh prow@35.247.28.1:22: stderr: ""
Jan 29 01:58:12.125: INFO: ssh prow@35.247.28.1:22: exit code: 0
Jan 29 01:58:12.125: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-7c3d condition Ready to be false
Jan 29 01:58:12.125: INFO: ssh prow@35.233.188.19:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 01:58:12.125: INFO: ssh prow@35.233.188.19:22: stdout: ""
Jan 29 01:58:12.125: INFO: ssh prow@35.233.188.19:22: stderr: ""
Jan 29 01:58:12.125: INFO: ssh prow@35.233.188.19:22: exit code: 0
Jan 29 01:58:12.125: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6w15 condition Ready to be false
Jan 29 01:58:12.169: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:12.169: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:14.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:14.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:14.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:16.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:16.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:16.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:18.241: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:18.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:18.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:20.284: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:20.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:20.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:22.327: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:22.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:22.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:24.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:24.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:24.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:26.414: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:26.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:26.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:28.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:28.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:28.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:30.499: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:30.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:30.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:32.542: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:32.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:32.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:34.587: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:34.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:34.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:58:36.627: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:36.695: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:36.695: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:38.667: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:38.734: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:38.735: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:40.708: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:40.775: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:40.775: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:42.748: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:42.814: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:42.814: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:44.787: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:44.854: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:44.854: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:46.827: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:46.894: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:46.894: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:48.868: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:48.934: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:48.934: INFO: Couldn't get node
bootstrap-e2e-minion-group-6w15
Jan 29 01:58:50.907: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:50.974: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:50.974: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:52.947: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:53.014: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:53.014: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:54.987: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:55.054: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:55.054: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:57.026: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:57.094: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:58:57.094: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:59.066: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:58:59.134: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:58:59.134: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:59:01.106: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:59:01.173: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:59:01.173: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:59:03.145: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:59:03.213: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:59:03.213: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:59:05.185: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 01:59:05.253: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 01:59:05.253: INFO: Couldn't get node bootstrap-e2e-minion-group-7c3d
Jan 29 01:59:13.985: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:13.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:14.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:16.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:16.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:16.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:18.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:18.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:18.200: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:20.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:20.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:20.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:22.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:22.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:22.300: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:24.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:24.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:24.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:26.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:26.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:26.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:28.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:28.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:28.443: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:30.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:30.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:30.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:32.539: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:32.539: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:32.539: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:34.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:34.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:34.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:36.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:36.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:36.637: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:38.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:38.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:38.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:40.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:40.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled
Jan 29 01:59:40.732: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:42.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:42.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:42.780: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:44.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:44.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:44.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:46.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:46.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:46.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:48.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:48.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:48.926: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:50.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:50.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:50.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:53.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:53.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:53.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:55.068: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:55.068: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:55.068: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:57.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:57.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:57.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:59.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:59.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 01:59:59.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:01.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:01.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:01.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:03.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:03.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:03.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:05.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:05.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:05.307: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:07.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:07.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:07.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:09.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:09.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:09.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:11.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:11.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:11.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-7c3d didn't reach desired Ready condition status (false) within 2m0s
Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-s51h didn't reach desired Ready condition status (false) within 2m0s
Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-6w15 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-6w15 failed reboot test.
Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-7c3d failed reboot test.
Jan 29 02:00:13.454: INFO: Node bootstrap-e2e-minion-group-s51h failed reboot test.
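A quick sanity check on the window arithmetic (timestamps taken from the log: the 2m0s NotReady waits began at 01:58:12, and the `date` trace in the termination-hook output further down prints 01:58:22 right after the DROP rule was inserted; the calculation itself is illustrative, using GNU `date`):

```shell
# Illustrative timing check, not part of the test. Requires GNU date (-d).
drop_applied="01:58:22"   # `date` stamp from the drop-inbound script trace
wait_start="01:58:12"     # when "Waiting up to 2m0s ... Ready to be false" began

# The script holds the DROP rule for `sleep 120`; the wait is capped at 2m0s.
drop_removed=$(date -u -d "1970-01-01 $drop_applied UTC + 120 seconds" +%H:%M:%S)
wait_end=$(date -u -d "1970-01-01 $wait_start UTC + 2 minutes" +%H:%M:%S)

echo "DROP rules removed at:  $drop_removed"   # 02:00:22
echo "NotReady wait expired:  $wait_end"       # 02:00:12
```

So the inbound-DROP window covered essentially the whole 2m wait (the timeout was logged at 02:00:13.454), yet none of the three nodes was ever observed NotReady; note also the stretch of "Couldn't get node" errors from 01:58:36 to 01:59:05, during which the test's own API requests were failing.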
Jan 29 02:00:13.455: INFO: Executing termination hook on nodes
Jan 29 02:00:13.455: INFO: Getting external IP address for bootstrap-e2e-minion-group-6w15
Jan 29 02:00:13.455: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-6w15(35.233.188.19:22)
Jan 29 02:00:29.456: INFO: ssh prow@35.233.188.19:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 02:00:29.456: INFO: ssh prow@35.233.188.19:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 01:58:22 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 02:00:29.456: INFO: ssh prow@35.233.188.19:22: stderr: ""
Jan 29 02:00:29.456: INFO: ssh prow@35.233.188.19:22: exit code: 0
Jan 29 02:00:29.456: INFO: Getting external IP address for bootstrap-e2e-minion-group-7c3d
Jan 29 02:00:29.456: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-7c3d(35.247.28.1:22)
Jan 29 02:00:29.974: INFO: ssh prow@35.247.28.1:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 02:00:29.974: INFO: ssh prow@35.247.28.1:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 01:58:22 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 02:00:29.974: INFO: ssh prow@35.247.28.1:22: stderr: ""
Jan 29 02:00:29.974: INFO: ssh prow@35.247.28.1:22: exit code: 0
Jan 29 02:00:29.974: INFO: Getting external IP address for bootstrap-e2e-minion-group-s51h
Jan 29 02:00:29.974: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-s51h(34.145.127.28:22)
Jan 29 02:00:30.498: INFO: ssh prow@34.145.127.28:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 02:00:30.498: INFO: ssh prow@34.145.127.28:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 01:58:22 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 02:00:30.498: INFO: ssh prow@34.145.127.28:22: stderr: ""
Jan 29 02:00:30.498: INFO: ssh prow@34.145.127.28:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:00:30.499
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 02:00:30.499 (2m19.171s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:00:30.499
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 02:00:30.499
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
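The termination-hook stdout above is a `set -x` trace, from which the drop-inbound script can be reconstructed: insert an ACCEPT rule for loopback, insert a DROP rule for everything else, wait out the outage window, then delete both rules. The sketch below is a reconstruction from that trace, not the exact test source; `iptables` is stubbed with `echo` and the sleeps shortened to zero so it is safe to run anywhere (the real hook runs `sudo iptables` and sleeps 10s before and 120s during the drop).

```shell
# Stub: the real hook uses IPT="sudo iptables".
IPT="echo iptables"

drop_inbound() {
    sleep 0   # real script: sleep 10, so the SSH session can detach first
    # Keep loopback traffic working so local services are unaffected...
    while true; do $IPT -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
    # ...then drop every other inbound packet.
    while true; do $IPT -I INPUT 2 -j DROP && break; done
    date      # timestamp the start of the outage window
    sleep 0   # real script: sleep 120, the outage window itself
    # Delete the rules in reverse order to restore connectivity.
    while true; do $IPT -D INPUT -j DROP && break; done
    while true; do $IPT -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
}

out="$(drop_inbound)"
printf '%s\n' "$out"
```

The `while true; do … && break; done` loops match the repeated `+ true` / `+ break` pairs in the trace: each iptables call is retried until it succeeds.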
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-2nvv4 to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.229909205s (4.229917066s including waiting)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-2nvv4_kube-system(c5a7c76e-33f7-4271-a7f7-8f4b6013857d)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-sch2n to bootstrap-e2e-minion-group-s51h
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 968.405842ms (968.417139ms including waiting)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-2nvv4
Jan 29 02:00:30.548: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-sch2n
Jan 29 02:00:30.548: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 02:00:30.548: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 02:00:30.548: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d580 became leader
Jan 29 02:00:30.548: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b84f3 became leader
Jan 29 02:00:30.548: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1f6a8 became leader
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-krs9s to bootstrap-e2e-minion-group-s51h
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 589.41049ms (589.437215ms including waiting)
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-rw7fw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rw7fw to bootstrap-e2e-minion-group-7c3d
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 627.397814ms (627.417417ms including waiting)
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-x4gbp: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-x4gbp to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.54378487s (2.543795192s including waiting)
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent
Jan 29 02:00:30.548: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent
Jan 29 02:00:30.549: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent
Jan 29 02:00:30.549: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:00:30.549: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168)
Jan 29 02:00:30.549: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.549: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rw7fw
Jan 29 02:00:30.549: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-x4gbp
Jan 29 02:00:30.549: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-krs9s
Jan 29 02:00:30.549: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 02:00:30.549: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:00:30.549: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:00:30.549: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 02:00:30.549: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 02:00:30.549: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 02:00:30.549: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8c88f9f3-0fcf-4820-9f5f-5ee5c968f50d became leader
Jan 29 02:00:30.549: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e5ddf3f0-26c9-4d3b-ba00-8f32b5849ba5 became leader
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fths2 to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 4.452281102s (4.452289884s including waiting)
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe)
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fths2
Jan 29 02:00:30.549: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38)
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-s51h_kube-system(2451b12f9e04e1f8e16fde66c2622fcd)
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 02:00:30.549: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a9b313b0-f9fa-43de-b979-0958c05e1287 became leader
Jan 29 02:00:30.549: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ecac3899-f709-4f43-824f-37faa839889c became leader
Jan 29 02:00:30.549: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_460317a8-6d35-4656-87b9-0d8d3533477a became leader
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-9bf57 to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 573.484189ms (573.492084ms including waiting)
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:00:30.549: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-9bf57
Jan 29 02:00:30.549: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 02:00:30.549: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bff8h to bootstrap-e2e-minion-group-s51h
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 737.160338ms (737.179651ms including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.876782326s (1.876796204s including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bv2w9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.977248ms (680.991364ms including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.818844362s (1.818852935s including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pn2qm to bootstrap-e2e-minion-group-7c3d
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 679.514836ms (679.523319ms including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.788401445s (1.788433466s including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qnhsn to bootstrap-e2e-master
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 888.975253ms (888.981818ms including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.94842067s (1.948435203s including waiting)
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qnhsn
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pn2qm
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bff8h
Jan 29 02:00:30.549: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bv2w9
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-tj5j9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.279253505s (2.279262122s including waiting)
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.794216432s (3.794249509s including waiting)
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server-nanny
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server-nanny
Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness
probe failed: HTTP probe failed with statuscode: 500 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server-nanny Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-tj5j9 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-tj5j9 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-kkpk2 to bootstrap-e2e-minion-group-7c3d Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.249697964s (1.249709924s including 
waiting) Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 964.990126ms (965.003136ms including waiting) Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet 
bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": context deadline exceeded Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-kkpk2 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 02:00:30.549: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-6w15 Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.228985526s (2.228994351s including waiting) Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container volume-snapshot-controller Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container volume-snapshot-controller Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container volume-snapshot-controller Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f15bbfbe-0efc-4a1b-ab62-e07fa18067f5) Jan 29 02:00:30.549: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:00:30.549 (50ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:00:30.549 Jan 29 02:00:30.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:00:30.597 (48ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 02:00:30.597 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 02:00:30.597 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:00:30.597 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:00:30.597 STEP: Collecting events from namespace "reboot-8100". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 02:00:30.597 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 02:00:30.638 Jan 29 02:00:30.679: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 02:00:30.679: INFO: Jan 29 02:00:30.726: INFO: Logging node info for node bootstrap-e2e-master Jan 29 02:00:30.777: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 09b38bdb-4830-432f-941a-7f47d2e4cb82 760 0 2023-01-29 01:56:15 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 01:56:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 01:57:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 01:57:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.48.38,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817944af0c35e596144cbe0c39ece004,SystemUUID:817944af-0c35-e596-144c-be0c39ece004,BootID:10741312-523c-4032-96d6-5f4f987f3139,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from 
Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:00:30.778: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 02:00:30.820: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 02:00:30.882: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 29 02:00:30.882: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 01:55:48 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container l7-lb-controller ready: true, restart count 4 Jan 29 02:00:30.882: INFO: metadata-proxy-v0.1-qnhsn started at 2023-01-29 01:56:48 +0000 UTC (0+2 container statuses recorded) Jan 29 02:00:30.882: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 02:00:30.882: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 02:00:30.882: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 01:55:30 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container etcd-container ready: true, restart count 0 Jan 29 02:00:30.882: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container etcd-container ready: true, restart count 0 Jan 29 02:00:30.882: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container konnectivity-server-container ready: true, restart count 0 Jan 29 02:00:30.882: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container kube-apiserver ready: true, restart count 1 Jan 29 02:00:30.882: INFO: 
kube-scheduler-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container kube-scheduler ready: true, restart count 2 Jan 29 02:00:30.882: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 01:55:48 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:30.882: INFO: Container kube-addon-manager ready: true, restart count 0 Jan 29 02:00:31.108: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 02:00:31.108: INFO: Logging node info for node bootstrap-e2e-minion-group-6w15 Jan 29 02:00:31.150: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6w15 1fb28d13-4bf7-48f6-87ef-e22ff445a0fa 686 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6w15 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } 
{kube-controller-manager Update v1 2023-01-29 01:56:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 01:56:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 01:56:51 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-6w15,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is 
functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 01:56:24 +0000 UTC,LastTransitionTime:2023-01-29 01:56:23 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:51 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:51 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:51 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 01:56:51 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.188.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4953e80002e138ed6b9c633aa1bea962,SystemUUID:4953e800-02e1-38ed-6b9c-633aa1bea962,BootID:e1ee021c-d911-4c88-add0-6a97765e908a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:00:31.150: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6w15 Jan 29 02:00:31.194: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6w15 Jan 29 02:00:31.255: INFO: volume-snapshot-controller-0 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container volume-snapshot-controller ready: true, restart count 4 Jan 29 02:00:31.255: INFO: konnectivity-agent-x4gbp started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 29 02:00:31.255: INFO: kube-proxy-bootstrap-e2e-minion-group-6w15 started at 2023-01-29 01:56:20 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container kube-proxy 
ready: true, restart count 2 Jan 29 02:00:31.255: INFO: metadata-proxy-v0.1-bv2w9 started at 2023-01-29 01:56:21 +0000 UTC (0+2 container statuses recorded) Jan 29 02:00:31.255: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 02:00:31.255: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 02:00:31.255: INFO: kube-dns-autoscaler-5f6455f985-fths2 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container autoscaler ready: true, restart count 3 Jan 29 02:00:31.255: INFO: coredns-6846b5b5f-2nvv4 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container coredns ready: false, restart count 3 Jan 29 02:00:31.255: INFO: l7-default-backend-8549d69d99-9bf57 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.255: INFO: Container default-http-backend ready: true, restart count 1 Jan 29 02:00:31.426: INFO: Latency metrics for node bootstrap-e2e-minion-group-6w15 Jan 29 02:00:31.426: INFO: Logging node info for node bootstrap-e2e-minion-group-7c3d Jan 29 02:00:31.469: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7c3d 8e1fb573-c544-42e8-afb6-9489bf273e1f 798 0 2023-01-29 01:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7c3d kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2023-01-29 01:56:18 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 01:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-29 01:56:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} 
{kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 01:57:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-7c3d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 
UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:22 +0000 UTC,LastTransitionTime:2023-01-29 01:56:21 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:19 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:19 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 01:57:19 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 01:57:19 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.28.1,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e82fc84d3d165f0af5fb24e7309ec0f6,SystemUUID:e82fc84d-3d16-5f0a-f5fb-24e7309ec0f6,BootID:739a2898-bd99-4e3a-8596-b78c81d593c1,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:00:31.469: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7c3d Jan 29 02:00:31.512: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7c3d Jan 29 02:00:31.570: INFO: metadata-proxy-v0.1-pn2qm started at 2023-01-29 01:56:19 +0000 UTC (0+2 container statuses recorded) Jan 29 02:00:31.570: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 02:00:31.570: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 02:00:31.570: INFO: konnectivity-agent-rw7fw started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.570: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 29 02:00:31.570: INFO: metrics-server-v0.5.2-867b8754b9-kkpk2 started at 2023-01-29 01:56:57 +0000 UTC (0+2 container statuses recorded) Jan 29 02:00:31.570: INFO: Container metrics-server ready: false, restart count 3 Jan 29 02:00:31.570: INFO: Container metrics-server-nanny ready: false, restart count 2 Jan 29 02:00:31.570: INFO: kube-proxy-bootstrap-e2e-minion-group-7c3d started at 2023-01-29 01:56:18 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.570: INFO: Container kube-proxy ready: true, restart count 2 Jan 29 02:00:31.739: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-7c3d Jan 29 02:00:31.739: INFO: Logging node info for node bootstrap-e2e-minion-group-s51h Jan 29 02:00:31.782: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-s51h 889261a3-c23b-4a70-8491-293cc30164ed 680 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-s51h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 01:56:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 01:56:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-s51h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 
UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 01:56:25 +0000 UTC,LastTransitionTime:2023-01-29 01:56:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:50 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:50 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 01:56:50 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 01:56:50 +0000 UTC,LastTransitionTime:2023-01-29 01:56:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.145.127.28,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e84ea8c5f84b48682cb3668f2d7a776c,SystemUUID:e84ea8c5-f84b-4868-2cb3-668f2d7a776c,BootID:0132a2c1-d402-4210-807d-5fbc99b4e14d,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:00:31.782: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-s51h Jan 29 02:00:31.825: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-s51h Jan 29 02:00:31.885: INFO: kube-proxy-bootstrap-e2e-minion-group-s51h started at 2023-01-29 01:56:20 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.885: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 02:00:31.885: INFO: metadata-proxy-v0.1-bff8h started at 2023-01-29 01:56:21 +0000 UTC (0+2 container statuses recorded) Jan 29 02:00:31.885: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 02:00:31.885: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 02:00:31.885: INFO: konnectivity-agent-krs9s started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.885: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 29 02:00:31.885: INFO: coredns-6846b5b5f-sch2n started at 2023-01-29 01:56:42 +0000 UTC (0+1 container statuses recorded) Jan 29 02:00:31.885: INFO: Container coredns ready: false, restart count 2 Jan 29 02:00:32.055: INFO: Latency metrics for node bootstrap-e2e-minion-group-s51h END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:00:32.055 (1.458s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:00:32.055 (1.458s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:00:32.055 STEP: Destroying namespace "reboot-8100" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 02:00:32.055 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:00:32.098 (43ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:00:32.098 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:00:32.098 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] wait for service account "default" in namespace "reboot-9826": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 02:11:18.898 (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:09:18.85 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:09:18.85 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:09:18.85 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 02:09:18.85 Jan 29 02:09:18.850: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 02:09:18.852 Jan 29 02:11:18.897: INFO: Unexpected error: <*fmt.wrapError | 0xc0043d2000>: { msg: "wait for service account \"default\" in namespace \"reboot-9826\": timed out waiting for the condition", err: <*errors.errorString | 0xc0001c9af0>{ s: "timed out waiting for the condition", }, } [FAILED] wait for service account "default" in namespace "reboot-9826": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 02:11:18.898 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:11:18.898 (2m0.047s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:11:18.898 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 02:11:18.898 Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-2nvv4 to bootstrap-e2e-minion-group-6w15 Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.229909205s (4.229917066s including waiting) Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-2nvv4_kube-system(c5a7c76e-33f7-4271-a7f7-8f4b6013857d) Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.18:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-sch2n to bootstrap-e2e-minion-group-s51h
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 968.405842ms (968.417139ms including waiting)
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-sch2n_kube-system(0ca61b79-17d9-42ef-bece-365ae3a67989)
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-2nvv4
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-sch2n
Jan 29 02:11:18.949: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 02:11:18.949: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 02:11:18.949: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 02:11:18.949: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 02:11:18.949: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 02:11:18.949: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 02:11:18.949: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d580 became leader
Jan 29 02:11:18.949: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b84f3 became leader
Jan 29 02:11:18.949: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1f6a8 became leader
Jan 29 02:11:18.949: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_d2447 became leader
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-krs9s to bootstrap-e2e-minion-group-s51h
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 589.41049ms (589.437215ms including waiting)
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rw7fw to bootstrap-e2e-minion-group-7c3d
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 627.397814ms (627.417417ms including waiting)
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rw7fw_kube-system(6c6104fa-8a94-4417-b2d9-dbd47d6240f2)
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-x4gbp to bootstrap-e2e-minion-group-6w15
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.54378487s (2.543795192s including waiting)
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168)
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168)
Jan 29 02:11:18.949: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rw7fw
Jan 29 02:11:18.949: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-x4gbp
Jan 29 02:11:18.949: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-krs9s
Jan 29 02:11:18.949: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 02:11:18.949: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 02:11:18.949: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 02:11:18.949: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 02:11:18.949: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 02:11:18.949: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:11:18.949: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8c88f9f3-0fcf-4820-9f5f-5ee5c968f50d became leader
Jan 29 02:11:18.949: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e5ddf3f0-26c9-4d3b-ba00-8f32b5849ba5 became leader
Jan 29 02:11:18.949: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f4908364-bab0-42a0-b122-c2caa2e85f9f became leader
Jan 29 02:11:18.949: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_712d0dd3-6c1a-4e1f-b3cc-88b0c22b6924 became leader
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fths2 to bootstrap-e2e-minion-group-6w15
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 4.452281102s (4.452289884s including waiting)
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe)
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe)
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fths2
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38)
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38)
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-s51h_kube-system(2451b12f9e04e1f8e16fde66c2622fcd)
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 02:11:18.949: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a9b313b0-f9fa-43de-b979-0958c05e1287 became leader
Jan 29 02:11:18.949: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ecac3899-f709-4f43-824f-37faa839889c became leader
Jan 29 02:11:18.949: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_460317a8-6d35-4656-87b9-0d8d3533477a became leader
Jan 29 02:11:18.949: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_627862b6-098a-451d-a466-095484f8ed41 became leader
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-9bf57 to bootstrap-e2e-minion-group-6w15
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 573.484189ms (573.492084ms including waiting)
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-9bf57
Jan 29 02:11:18.949: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bff8h to bootstrap-e2e-minion-group-s51h
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 737.160338ms (737.179651ms including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.876782326s (1.876796204s including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bv2w9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.977248ms (680.991364ms including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.818844362s (1.818852935s including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pn2qm to bootstrap-e2e-minion-group-7c3d
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 679.514836ms (679.523319ms including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.788401445s (1.788433466s including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qnhsn to bootstrap-e2e-master
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 888.975253ms (888.981818ms including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.94842067s (1.948435203s including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qnhsn
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pn2qm
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bff8h
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bv2w9
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-tj5j9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.279253505s (2.279262122s including waiting)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.794216432s (3.794249509s including waiting)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-tj5j9
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-tj5j9
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-kkpk2 to bootstrap-e2e-minion-group-7c3d
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.249697964s (1.249709924s including waiting)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 964.990126ms (965.003136ms including waiting)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": context deadline exceeded
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-kkpk2_kube-system(479216da-5769-49ec-9587-0666568c1790)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-kkpk2_kube-system(479216da-5769-49ec-9587-0666568c1790)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-kkpk2
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-6w15
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.228985526s (2.228994351s including waiting)
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f15bbfbe-0efc-4a1b-ab62-e07fa18067f5)
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f15bbfbe-0efc-4a1b-ab62-e07fa18067f5)
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:11:18.949 (52ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:11:18.949
Jan 29 02:11:18.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:11:18.992 (43ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:11:18.992
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:11:18.992
STEP: Collecting events from namespace "reboot-9826". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 02:11:18.992
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 02:11:19.034
Jan 29 02:11:19.076: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 02:11:19.076: INFO:
Jan 29 02:11:19.119: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 02:11:19.161: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 09b38bdb-4830-432f-941a-7f47d2e4cb82 1803 0 2023-01-29 01:56:15 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 01:56:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 02:07:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:13 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:13 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:13 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:07:13 +0000 UTC,LastTransitionTime:2023-01-29 01:56:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.48.38,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817944af0c35e596144cbe0c39ece004,SystemUUID:817944af-0c35-e596-144c-be0c39ece004,BootID:10741312-523c-4032-96d6-5f4f987f3139,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 02:11:19.162: INFO: Logging kubelet events for node bootstrap-e2e-master
Jan 29 02:11:19.206: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 29 02:11:19.262: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 01:55:30 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container etcd-container ready: true, restart count 1
Jan 29 02:11:19.262: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container etcd-container ready: true, restart count 0
Jan 29 02:11:19.262: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container konnectivity-server-container ready: true, restart count 1
Jan 29 02:11:19.262: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container kube-controller-manager ready: false, restart count 6
Jan 29 02:11:19.262: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 01:55:48 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container l7-lb-controller ready: true, restart count 5
Jan 29 02:11:19.262: INFO: metadata-proxy-v0.1-qnhsn started at 2023-01-29 01:56:48 +0000 UTC (0+2 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container metadata-proxy ready: true, restart count 0
Jan 29 02:11:19.262: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Jan 29 02:11:19.262: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container kube-apiserver ready: true, restart count 2
Jan 29 02:11:19.262: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container kube-scheduler ready: true, restart count 3
Jan 29 02:11:19.262: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 01:55:48 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container kube-addon-manager ready: false, restart count 2
Jan 29 02:11:19.446: INFO: Latency metrics for node bootstrap-e2e-master
Jan 29 02:11:19.446: INFO: Logging node info for node bootstrap-e2e-minion-group-6w15
Jan 29 02:11:19.489: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6w15 1fb28d13-4bf7-48f6-87ef-e22ff445a0fa 2107 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6w15 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} }
{kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:09:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-6w15,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:05 +0000 
UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:05 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:05 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:07:05 +0000 UTC,LastTransitionTime:2023-01-29 02:07:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.188.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4953e80002e138ed6b9c633aa1bea962,SystemUUID:4953e800-02e1-38ed-6b9c-633aa1bea962,BootID:de7cc9dc-cf41-49bc-9f0a-238c12b78432,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:11:19.490: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6w15 Jan 29 02:11:19.535: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6w15 Jan 29 02:11:19.638: INFO: volume-snapshot-controller-0 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:19.638: INFO: Container volume-snapshot-controller ready: false, restart count 7 Jan 29 02:11:19.638: INFO: coredns-6846b5b5f-2nvv4 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:19.638: INFO: Container coredns ready: true, restart count 4 Jan 29 02:11:19.638: INFO: metadata-proxy-v0.1-bv2w9 started at 2023-01-29 01:56:21 +0000 UTC (0+2 container statuses recorded) Jan 29 02:11:19.638: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 02:11:19.638: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 02:11:19.638: INFO: konnectivity-agent-x4gbp started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:19.638: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 29 02:11:19.638: INFO: kube-proxy-bootstrap-e2e-minion-group-6w15 started at 2023-01-29 01:56:20 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:19.638: INFO: Container kube-proxy ready: false, restart count 6 Jan 29 02:11:19.638: INFO: l7-default-backend-8549d69d99-9bf57 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:19.638: INFO: Container default-http-backend ready: true, restart count 2 Jan 29 02:11:19.638: INFO: kube-dns-autoscaler-5f6455f985-fths2 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:19.638: INFO: Container autoscaler ready: true, restart count 5 Jan 29 02:11:19.825: INFO: Latency metrics for node bootstrap-e2e-minion-group-6w15 Jan 29 
02:11:19.825: INFO: Logging node info for node bootstrap-e2e-minion-group-7c3d Jan 29 02:11:19.869: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7c3d 8e1fb573-c544-42e8-afb6-9489bf273e1f 2069 0 2023-01-29 01:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7c3d kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:03:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 
02:07:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 02:09:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-7c3d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} 
{<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 02:04:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.28.1,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e82fc84d3d165f0af5fb24e7309ec0f6,SystemUUID:e82fc84d-3d16-5f0a-f5fb-24e7309ec0f6,BootID:d8228130-72eb-4a47-9a62-918a765d9db2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:11:19.870: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7c3d Jan 29 02:11:19.920: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7c3d Jan 29 02:11:20.027: INFO: kube-proxy-bootstrap-e2e-minion-group-7c3d started at 2023-01-29 02:03:34 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:20.027: INFO: Container kube-proxy ready: true, restart count 5 Jan 29 02:11:20.027: INFO: metadata-proxy-v0.1-pn2qm started at 2023-01-29 01:56:19 +0000 UTC (0+2 container statuses recorded) Jan 29 02:11:20.027: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 02:11:20.027: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 02:11:20.027: INFO: konnectivity-agent-rw7fw started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:20.027: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 29 02:11:20.027: INFO: metrics-server-v0.5.2-867b8754b9-kkpk2 started at 2023-01-29 01:56:57 +0000 UTC (0+2 container statuses recorded) Jan 29 02:11:20.027: INFO: Container metrics-server ready: false, restart count 7 Jan 29 02:11:20.027: INFO: Container metrics-server-nanny ready: false, restart count 6 Jan 29 02:11:20.212: INFO: Latency metrics for node bootstrap-e2e-minion-group-7c3d Jan 29 02:11:20.212: INFO: Logging node info for node bootstrap-e2e-minion-group-s51h Jan 29 02:11:20.255: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-s51h 889261a3-c23b-4a70-8491-293cc30164ed 2112 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-s51h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:09:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-s51h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:06 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:06 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:06 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:07:06 +0000 UTC,LastTransitionTime:2023-01-29 02:07:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.145.127.28,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e84ea8c5f84b48682cb3668f2d7a776c,SystemUUID:e84ea8c5-f84b-4868-2cb3-668f2d7a776c,BootID:d00f00b1-34f8-4b2c-87f8-05ec98efeca6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:11:20.256: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-s51h Jan 29 02:11:20.302: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-s51h Jan 29 02:11:20.392: INFO: coredns-6846b5b5f-sch2n started at 2023-01-29 01:56:42 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:20.392: INFO: Container coredns ready: true, restart count 5 Jan 29 02:11:20.392: INFO: kube-proxy-bootstrap-e2e-minion-group-s51h started at 2023-01-29 01:56:20 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:20.392: INFO: Container kube-proxy ready: true, restart count 6 Jan 29 02:11:20.392: INFO: metadata-proxy-v0.1-bff8h started at 2023-01-29 01:56:21 +0000 UTC (0+2 container statuses recorded) Jan 29 02:11:20.392: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 02:11:20.392: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 02:11:20.392: INFO: konnectivity-agent-krs9s started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:20.392: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 29 02:11:20.563: INFO: Latency metrics for node bootstrap-e2e-minion-group-s51h END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:11:20.563 (1.57s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:11:20.563 (1.57s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:11:20.563 STEP: Destroying namespace "reboot-9826" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 02:11:20.563 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:11:20.607 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:11:20.608 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:11:20.608 (0s)
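The `--ginkgo.focus` patterns used to rerun these cases escape every non-alphanumeric character and encode spaces as `\s` so the regex survives shell quoting. A minimal sketch of that escaping convention (`ginkgo_focus` is a hypothetical helper for illustration, not part of `hack/e2e.go`):

```python
def ginkgo_focus(test_name: str) -> str:
    # Reproduce the escaping seen in the focus strings above:
    # alphanumerics pass through, spaces become \s, everything
    # else gets a backslash; anchor the end with $.
    out = []
    for ch in test_name:
        if ch.isalnum():
            out.append(ch)
        elif ch == " ":
            out.append(r"\s")
        else:
            out.append("\\" + ch)
    return "".join(out) + "$"

print(ginkgo_focus("[sig-cloud-provider-gcp] Reboot [Disruptive]"))
# -> \[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]$
```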
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] wait for service account "default" in namespace "reboot-9826": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 02:11:18.898 (from junit_01.xml)
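The "timed out waiting for the condition" wording in this failure is characteristic of a generic poll-until-deadline loop (in Kubernetes itself this lives in Go's apimachinery `wait` package). A minimal Python sketch of the pattern, for illustration only:

```python
import time

def wait_for_condition(check, timeout, interval=0.25):
    # Poll check() until it returns True or the deadline passes;
    # on timeout, surface the same style of error seen in the log.
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return
        if time.monotonic() >= deadline:
            raise TimeoutError("timed out waiting for the condition")
        time.sleep(interval)
```

Here the BeforeEach polled for the `default` service account in the fresh `reboot-9826` namespace and gave up after its 2-minute budget because the control plane was still recovering from the previous disruptive case.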
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:09:18.85 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:09:18.85 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:09:18.85 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 02:09:18.85 Jan 29 02:09:18.850: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 02:09:18.852 Jan 29 02:11:18.897: INFO: Unexpected error: <*fmt.wrapError | 0xc0043d2000>: { msg: "wait for service account \"default\" in namespace \"reboot-9826\": timed out waiting for the condition", err: <*errors.errorString | 0xc0001c9af0>{ s: "timed out waiting for the condition", }, } [FAILED] wait for service account "default" in namespace "reboot-9826": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 02:11:18.898 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:11:18.898 (2m0.047s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:11:18.898 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 02:11:18.898 Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-2nvv4 to bootstrap-e2e-minion-group-6w15 Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.229909205s (4.229917066s including waiting) Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-2nvv4_kube-system(c5a7c76e-33f7-4271-a7f7-8f4b6013857d) Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.18:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-sch2n to bootstrap-e2e-minion-group-s51h Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 968.405842ms (968.417139ms including waiting) Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns Jan 29 02:11:18.949: 
INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-sch2n_kube-system(0ca61b79-17d9-42ef-bece-365ae3a67989) Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: 
Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-2nvv4 Jan 29 02:11:18.949: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-sch2n Jan 29 02:11:18.949: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 02:11:18.949: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 02:11:18.949: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 02:11:18.949: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 02:11:18.949: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 02:11:18.949: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:11:18.949: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 02:11:18.949: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d580 became leader Jan 29 02:11:18.949: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b84f3 became leader Jan 29 02:11:18.949: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1f6a8 became leader Jan 29 02:11:18.949: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_d2447 became leader Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-krs9s to bootstrap-e2e-minion-group-s51h Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 589.41049ms (589.437215ms including waiting) Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet 
bootstrap-e2e-minion-group-s51h} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rw7fw to bootstrap-e2e-minion-group-7c3d Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 627.397814ms (627.417417ms including waiting) Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: 
{kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rw7fw_kube-system(6c6104fa-8a94-4417-b2d9-dbd47d6240f2) Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-x4gbp to bootstrap-e2e-minion-group-6w15 Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image 
"registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.54378487s (2.543795192s including waiting) Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168) Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent Jan 29 02:11:18.949: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168) Jan 29 02:11:18.949: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rw7fw Jan 29 02:11:18.949: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-x4gbp Jan 29 02:11:18.949: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-krs9s Jan 29 02:11:18.949: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 02:11:18.949: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 02:11:18.949: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 02:11:18.949: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and 
re-created. Jan 29 02:11:18.949: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 02:11:18.949: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 02:11:18.949: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 02:11:18.949: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 02:11:18.949: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: 
connect: connection refused Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 02:11:18.949: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:11:18.949: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8c88f9f3-0fcf-4820-9f5f-5ee5c968f50d became leader Jan 29 02:11:18.949: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e5ddf3f0-26c9-4d3b-ba00-8f32b5849ba5 became leader Jan 29 02:11:18.949: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f4908364-bab0-42a0-b122-c2caa2e85f9f became leader Jan 29 02:11:18.949: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_712d0dd3-6c1a-4e1f-b3cc-88b0c22b6924 became leader Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fths2 to bootstrap-e2e-minion-group-6w15 Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 4.452281102s (4.452289884s including waiting) Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe)
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe)
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fths2
Jan 29 02:11:18.949: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38)
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38)
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-s51h_kube-system(2451b12f9e04e1f8e16fde66c2622fcd)
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:11:18.949: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 02:11:18.949: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a9b313b0-f9fa-43de-b979-0958c05e1287 became leader
Jan 29 02:11:18.949: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ecac3899-f709-4f43-824f-37faa839889c became leader
Jan 29 02:11:18.949: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_460317a8-6d35-4656-87b9-0d8d3533477a became leader
Jan 29 02:11:18.949: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_627862b6-098a-451d-a466-095484f8ed41 became leader
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-9bf57 to bootstrap-e2e-minion-group-6w15
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 573.484189ms (573.492084ms including waiting)
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend
Jan 29 02:11:18.949: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-9bf57
Jan 29 02:11:18.949: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 02:11:18.949: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bff8h to bootstrap-e2e-minion-group-s51h
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 737.160338ms (737.179651ms including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.876782326s (1.876796204s including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bv2w9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.977248ms (680.991364ms including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.818844362s (1.818852935s including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pn2qm to bootstrap-e2e-minion-group-7c3d
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 679.514836ms (679.523319ms including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.788401445s (1.788433466s including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qnhsn to bootstrap-e2e-master
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 888.975253ms (888.981818ms including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.94842067s (1.948435203s including waiting)
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qnhsn
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pn2qm
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bff8h
Jan 29 02:11:18.949: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bv2w9
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-tj5j9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.279253505s (2.279262122s including waiting)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.794216432s (3.794249509s including waiting)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-tj5j9
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-tj5j9
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-kkpk2 to bootstrap-e2e-minion-group-7c3d
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.249697964s (1.249709924s including waiting)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 964.990126ms (965.003136ms including waiting)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": context deadline exceeded
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server-nanny
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server-nanny Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container metrics-server in pod 
metrics-server-v0.5.2-867b8754b9-kkpk2_kube-system(479216da-5769-49ec-9587-0666568c1790) Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-kkpk2_kube-system(479216da-5769-49ec-9587-0666568c1790) Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-kkpk2 Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 02:11:18.949: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-6w15
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.228985526s (2.228994351s including waiting)
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f15bbfbe-0efc-4a1b-ab62-e07fa18067f5)
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container volume-snapshot-controller
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f15bbfbe-0efc-4a1b-ab62-e07fa18067f5)
Jan 29 02:11:18.949: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:11:18.949 (52ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:11:18.949
Jan 29 02:11:18.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:11:18.992 (43ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:11:18.992
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:11:18.992
STEP: Collecting events from
namespace "reboot-9826". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 02:11:18.992
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 02:11:19.034
Jan 29 02:11:19.076: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 02:11:19.076: INFO:
Jan 29 02:11:19.119: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 02:11:19.161: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 09b38bdb-4830-432f-941a-7f47d2e4cb82 1803 0 2023-01-29 01:56:15 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}
{kube-controller-manager Update v1 2023-01-29 01:56:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 02:07:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:13 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:13 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:13 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:07:13 +0000 UTC,LastTransitionTime:2023-01-29 01:56:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.48.38,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817944af0c35e596144cbe0c39ece004,SystemUUID:817944af-0c35-e596-144c-be0c39ece004,BootID:10741312-523c-4032-96d6-5f4f987f3139,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from 
Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 02:11:19.162: INFO: Logging kubelet events for node bootstrap-e2e-master
Jan 29 02:11:19.206: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 29 02:11:19.262: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 01:55:30 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container etcd-container ready: true, restart count 1
Jan 29 02:11:19.262: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container etcd-container ready: true, restart count 0
Jan 29 02:11:19.262: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container konnectivity-server-container ready: true, restart count 1
Jan 29 02:11:19.262: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container kube-controller-manager ready: false, restart count 6
Jan 29 02:11:19.262: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 01:55:48 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container l7-lb-controller ready: true, restart count 5
Jan 29 02:11:19.262: INFO: metadata-proxy-v0.1-qnhsn started at 2023-01-29 01:56:48 +0000 UTC (0+2 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container metadata-proxy ready: true, restart count 0
Jan 29 02:11:19.262: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Jan 29 02:11:19.262: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.262: INFO: Container kube-apiserver ready: true, restart count 2
Jan 29 02:11:19.262: INFO:
kube-scheduler-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:19.262: INFO: Container kube-scheduler ready: true, restart count 3 Jan 29 02:11:19.262: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 01:55:48 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:19.262: INFO: Container kube-addon-manager ready: false, restart count 2 Jan 29 02:11:19.446: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 02:11:19.446: INFO: Logging node info for node bootstrap-e2e-minion-group-6w15 Jan 29 02:11:19.489: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6w15 1fb28d13-4bf7-48f6-87ef-e22ff445a0fa 2107 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6w15 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } 
{kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:09:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-6w15,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:05 +0000 
UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:05 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:05 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:07:05 +0000 UTC,LastTransitionTime:2023-01-29 02:07:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.188.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4953e80002e138ed6b9c633aa1bea962,SystemUUID:4953e800-02e1-38ed-6b9c-633aa1bea962,BootID:de7cc9dc-cf41-49bc-9f0a-238c12b78432,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 02:11:19.490: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6w15
Jan 29 02:11:19.535: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6w15
Jan 29 02:11:19.638: INFO: volume-snapshot-controller-0 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.638: INFO: Container volume-snapshot-controller ready: false, restart count 7
Jan 29 02:11:19.638: INFO: coredns-6846b5b5f-2nvv4 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.638: INFO: Container coredns ready: true, restart count 4
Jan 29 02:11:19.638: INFO: metadata-proxy-v0.1-bv2w9 started at 2023-01-29 01:56:21 +0000 UTC (0+2 container statuses recorded)
Jan 29 02:11:19.638: INFO: Container metadata-proxy ready: true, restart count 1
Jan 29 02:11:19.638: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1
Jan 29 02:11:19.638: INFO: konnectivity-agent-x4gbp started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.638: INFO: Container konnectivity-agent ready: true, restart count 6
Jan 29 02:11:19.638: INFO: kube-proxy-bootstrap-e2e-minion-group-6w15 started at 2023-01-29 01:56:20 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.638: INFO: Container kube-proxy ready: false, restart count 6
Jan 29 02:11:19.638: INFO: l7-default-backend-8549d69d99-9bf57 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.638: INFO: Container default-http-backend ready: true, restart count 2
Jan 29 02:11:19.638: INFO: kube-dns-autoscaler-5f6455f985-fths2 started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded)
Jan 29 02:11:19.638: INFO: Container autoscaler ready: true, restart count 5
Jan 29 02:11:19.825: INFO: Latency metrics for node bootstrap-e2e-minion-group-6w15
Jan 29
02:11:19.825: INFO: Logging node info for node bootstrap-e2e-minion-group-7c3d Jan 29 02:11:19.869: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7c3d 8e1fb573-c544-42e8-afb6-9489bf273e1f 2069 0 2023-01-29 01:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7c3d kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:03:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-29 
02:07:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 02:09:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-7c3d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} 
{<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:07:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 02:04:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.28.1,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e82fc84d3d165f0af5fb24e7309ec0f6,SystemUUID:e82fc84d-3d16-5f0a-f5fb-24e7309ec0f6,BootID:d8228130-72eb-4a47-9a62-918a765d9db2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:11:19.870: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7c3d Jan 29 02:11:19.920: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7c3d Jan 29 02:11:20.027: INFO: kube-proxy-bootstrap-e2e-minion-group-7c3d started at 2023-01-29 02:03:34 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:20.027: INFO: Container kube-proxy ready: true, restart count 5 Jan 29 02:11:20.027: INFO: metadata-proxy-v0.1-pn2qm started at 2023-01-29 01:56:19 +0000 UTC (0+2 container statuses recorded) Jan 29 02:11:20.027: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 02:11:20.027: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 02:11:20.027: INFO: konnectivity-agent-rw7fw started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:20.027: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 29 02:11:20.027: INFO: metrics-server-v0.5.2-867b8754b9-kkpk2 started at 2023-01-29 01:56:57 +0000 UTC (0+2 container statuses recorded) Jan 29 02:11:20.027: INFO: Container metrics-server ready: false, restart count 7 Jan 29 02:11:20.027: INFO: Container metrics-server-nanny ready: false, restart count 6 Jan 29 02:11:20.212: INFO: Latency metrics for node bootstrap-e2e-minion-group-7c3d Jan 29 02:11:20.212: INFO: Logging node info for node bootstrap-e2e-minion-group-s51h Jan 29 02:11:20.255: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-s51h 889261a3-c23b-4a70-8491-293cc30164ed 2112 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-s51h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:09:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-s51h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:09:57 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:06 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:06 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:07:06 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:07:06 +0000 UTC,LastTransitionTime:2023-01-29 02:07:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.145.127.28,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e84ea8c5f84b48682cb3668f2d7a776c,SystemUUID:e84ea8c5-f84b-4868-2cb3-668f2d7a776c,BootID:d00f00b1-34f8-4b2c-87f8-05ec98efeca6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:11:20.256: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-s51h Jan 29 02:11:20.302: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-s51h Jan 29 02:11:20.392: INFO: coredns-6846b5b5f-sch2n started at 2023-01-29 01:56:42 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:20.392: INFO: Container coredns ready: true, restart count 5 Jan 29 02:11:20.392: INFO: kube-proxy-bootstrap-e2e-minion-group-s51h started at 2023-01-29 01:56:20 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:20.392: INFO: Container kube-proxy ready: true, restart count 6 Jan 29 02:11:20.392: INFO: metadata-proxy-v0.1-bff8h started at 2023-01-29 01:56:21 +0000 UTC (0+2 container statuses recorded) Jan 29 02:11:20.392: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 02:11:20.392: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 02:11:20.392: INFO: konnectivity-agent-krs9s started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:11:20.392: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 29 02:11:20.563: INFO: Latency metrics for node bootstrap-e2e-minion-group-s51h END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:11:20.563 (1.57s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:11:20.563 (1.57s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:11:20.563 STEP: Destroying namespace "reboot-9826" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 02:11:20.563 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:11:20.607 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:11:20.608 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:11:20.608 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:15:46.169 (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:11:20.654 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:11:20.655 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:11:20.655 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 02:11:20.655 Jan 29 02:11:20.655: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 02:11:20.657 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 02:13:16.609 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 02:13:16.702 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:13:16.783 (1m56.129s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 02:13:16.783 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 02:13:16.784 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 02:13:16.784 Jan 29 02:13:16.879: INFO: Getting bootstrap-e2e-minion-group-6w15 Jan 29 02:13:16.880: INFO: Getting bootstrap-e2e-minion-group-s51h Jan 29 02:13:16.880: INFO: Getting bootstrap-e2e-minion-group-7c3d Jan 29 02:13:16.954: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7c3d condition Ready to be true Jan 29 02:13:16.955: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6w15 condition 
Ready to be true Jan 29 02:13:16.955: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-s51h condition Ready to be true Jan 29 02:13:16.996: INFO: Node bootstrap-e2e-minion-group-7c3d has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm] Jan 29 02:13:16.996: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm] Jan 29 02:13:16.996: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-pn2qm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:16.996: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7c3d" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:16.999: INFO: Node bootstrap-e2e-minion-group-s51h has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-bff8h kube-proxy-bootstrap-e2e-minion-group-s51h] Jan 29 02:13:16.999: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-bff8h kube-proxy-bootstrap-e2e-minion-group-s51h] Jan 29 02:13:16.999: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s51h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.000: INFO: Node bootstrap-e2e-minion-group-6w15 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0] Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0] Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for pod 
"metadata-proxy-v0.1-bff8h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-fths2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6w15" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bv2w9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.039: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=false. Elapsed: 42.165862ms Jan 29 02:13:17.039: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-7c3d' on 'bootstrap-e2e-minion-group-7c3d' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC }] Jan 29 02:13:17.039: INFO: Pod "metadata-proxy-v0.1-pn2qm": Phase="Running", Reason="", readiness=true. Elapsed: 42.557094ms Jan 29 02:13:17.039: INFO: Pod "metadata-proxy-v0.1-pn2qm" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.045: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 45.008619ms Jan 29 02:13:17.045: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2": Phase="Running", Reason="", readiness=true. 
Elapsed: 44.828765ms Jan 29 02:13:17.045: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.045: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }] Jan 29 02:13:17.046: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h": Phase="Running", Reason="", readiness=true. Elapsed: 46.550423ms Jan 29 02:13:17.046: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.046: INFO: Pod "metadata-proxy-v0.1-bff8h": Phase="Running", Reason="", readiness=true. Elapsed: 46.037399ms Jan 29 02:13:17.046: INFO: Pod "metadata-proxy-v0.1-bff8h" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.046: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-bff8h kube-proxy-bootstrap-e2e-minion-group-s51h] Jan 29 02:13:17.046: INFO: Getting external IP address for bootstrap-e2e-minion-group-s51h Jan 29 02:13:17.046: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-s51h(34.145.127.28:22) Jan 29 02:13:17.046: INFO: Pod "metadata-proxy-v0.1-bv2w9": Phase="Running", Reason="", readiness=true. 
Elapsed: 46.458945ms Jan 29 02:13:17.046: INFO: Pod "metadata-proxy-v0.1-bv2w9" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.047: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15": Phase="Running", Reason="", readiness=true. Elapsed: 46.95706ms Jan 29 02:13:17.047: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.569: INFO: ssh prow@34.145.127.28:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 02:13:17.569: INFO: ssh prow@34.145.127.28:22: stdout: "" Jan 29 02:13:17.569: INFO: ssh prow@34.145.127.28:22: stderr: "" Jan 29 02:13:17.569: INFO: ssh prow@34.145.127.28:22: exit code: 0 Jan 29 02:13:17.569: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-s51h condition Ready to be false Jan 29 02:13:17.611: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:13:19.110: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=false. Elapsed: 2.113066398s Jan 29 02:13:19.110: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.110018255s Jan 29 02:13:19.110: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-7c3d' on 'bootstrap-e2e-minion-group-7c3d' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC }] Jan 29 02:13:19.110: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }] Jan 29 02:13:19.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:13:21.081: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.084947441s
Jan 29 02:13:21.082: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-7c3d' on 'bootstrap-e2e-minion-group-7c3d' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC }]
Jan 29 02:13:21.089: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.089204084s
Jan 29 02:13:21.089: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:21.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:23.091: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=false. Elapsed: 6.094706566s
Jan 29 02:13:23.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-7c3d' on 'bootstrap-e2e-minion-group-7c3d' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC }]
Jan 29 02:13:23.095: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.09568861s
Jan 29 02:13:23.095: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:23.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:25.097: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=true. Elapsed: 8.100423314s
Jan 29 02:13:25.097: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d" satisfied condition "running and ready, or succeeded"
Jan 29 02:13:25.097: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm]
Jan 29 02:13:25.097: INFO: Getting external IP address for bootstrap-e2e-minion-group-7c3d
Jan 29 02:13:25.097: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-7c3d(35.247.28.1:22)
Jan 29 02:13:25.102: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.10228584s
Jan 29 02:13:25.102: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:25.617: INFO: ssh prow@35.247.28.1:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 29 02:13:25.617: INFO: ssh prow@35.247.28.1:22: stdout: ""
Jan 29 02:13:25.617: INFO: ssh prow@35.247.28.1:22: stderr: ""
Jan 29 02:13:25.617: INFO: ssh prow@35.247.28.1:22: exit code: 0
Jan 29 02:13:25.617: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-7c3d condition Ready to be false
Jan 29 02:13:25.659: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:25.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:27.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.091494679s
Jan 29 02:13:27.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:27.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:27.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:29.089: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 12.089727183s
Jan 29 02:13:29.089: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:29.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:29.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:31.093: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.093132842s
Jan 29 02:13:31.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:31.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:31.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:33.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.09167165s
Jan 29 02:13:33.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:33.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:33.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:35.101: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.101507263s
Jan 29 02:13:35.101: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:35.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:36.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:37.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.090967682s
Jan 29 02:13:37.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:37.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:38.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:39.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.091499469s
Jan 29 02:13:39.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:39.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:40.096: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:41.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.091777615s
Jan 29 02:13:41.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:42.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:42.139: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:43.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.091088276s
Jan 29 02:13:43.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:44.052: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:44.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:45.096: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 28.09641153s
Jan 29 02:13:45.096: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 02:13:45.096: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0]
Jan 29 02:13:45.096: INFO: Getting external IP address for bootstrap-e2e-minion-group-6w15
Jan 29 02:13:45.096: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6w15(35.233.188.19:22)
Jan 29 02:13:45.621: INFO: ssh prow@35.233.188.19:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 29 02:13:45.621: INFO: ssh prow@35.233.188.19:22: stdout: ""
Jan 29 02:13:45.621: INFO: ssh prow@35.233.188.19:22: stderr: ""
Jan 29 02:13:45.621: INFO: ssh prow@35.233.188.19:22: exit code: 0
Jan 29 02:13:45.621: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6w15 condition Ready to be false
Jan 29 02:13:45.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:46.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:46.225: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:47.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:48.138: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:48.267: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled
Jan 29 02:13:49.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:50.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:50.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:51.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:52.225: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:52.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:53.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:54.268: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:54.397: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:55.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:56.310: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:56.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:57.928: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:58.353: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:58.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:59.973: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:00.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:00.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:02.015: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:02.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:02.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:04.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:04.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:04.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:06.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:06.527: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:06.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:08.143: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:08.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:08.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:10.187: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:10.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:10.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:12.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:12.656: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:12.784: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:14.274: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:14.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:14.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:16.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:16.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:16.872: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:18.359: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:18.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:18.915: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:20.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:20.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:20.958: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:22.445: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:22.872: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:23.001: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:24.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:24.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:25.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:26.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:26.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:27.090: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:28.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:29.005: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:29.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:30.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:31.054: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:31.177: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:32.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:33.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:33.220: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:34.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:35.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:35.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:36.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:37.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:37.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:38.791: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:39.227: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:39.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:40.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:41.270: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:41.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:42.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:43.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:43.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:44.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:45.356: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:45.481: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:46.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:47.399: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:47.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:49.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:49.442: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:49.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:51.053: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:51.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:51.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:53.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:53.546: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:53.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:55.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:55.589: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:55.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:57.180: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:57.632: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:57.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:59.223: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:59.676: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:59.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:15:01.266: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:15:01.719: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:15:01.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:15:03.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:15:03.762: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled Jan 29 02:15:03.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:05.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:05.804: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:05.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:07.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:07.847: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:07.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:09.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:09.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:10.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 02:15:11.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:11.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:12.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:13.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:13.978: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:14.092: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:15.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:16.020: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:16.136: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:17.608: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 02:15:18.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:18.136: INFO: Node bootstrap-e2e-minion-group-s51h didn't reach desired Ready condition status (false) within 2m0s Jan 29 02:15:19.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:20.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:21.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:22.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:23.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:24.193: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:25.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:26.193: INFO: Node bootstrap-e2e-minion-group-7c3d didn't reach desired Ready condition status (false) within 2m0s Jan 29 02:15:27.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 02:15:29.869: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:31.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:33.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:35.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:38.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:40.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:42.125: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:44.169: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:46.169: INFO: Node bootstrap-e2e-minion-group-6w15 didn't reach desired Ready condition status (false) within 2m0s Jan 29 02:15:46.169: INFO: Node bootstrap-e2e-minion-group-6w15 failed reboot test. Jan 29 02:15:46.169: INFO: Node bootstrap-e2e-minion-group-7c3d failed reboot test. Jan 29 02:15:46.169: INFO: Node bootstrap-e2e-minion-group-s51h failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. 
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:15:46.169
< Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 02:15:46.169 (2m29.386s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:15:46.169
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 02:15:46.169
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-2nvv4 to bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.229909205s (4.229917066s including waiting)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-2nvv4_kube-system(c5a7c76e-33f7-4271-a7f7-8f4b6013857d)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.18:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.24:8080/health": dial tcp 10.64.3.24:8080: connect: connection refused
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-2nvv4_kube-system(c5a7c76e-33f7-4271-a7f7-8f4b6013857d)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.33:8181/ready": dial tcp 10.64.3.33:8181: connect: connection refused
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-sch2n to bootstrap-e2e-minion-group-s51h
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 968.405842ms (968.417139ms including waiting)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-sch2n_kube-system(0ca61b79-17d9-42ef-bece-365ae3a67989)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-2nvv4
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-sch2n
Jan 29 02:15:46.219: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 02:15:46.219: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 02:15:46.219: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 02:15:46.219: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 02:15:46.219: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 02:15:46.219: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 02:15:46.219: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d580 became leader
Jan 29 02:15:46.219: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b84f3 became leader
Jan 29 02:15:46.219: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1f6a8 became leader
Jan 29 02:15:46.219: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_d2447 became leader
Jan 29 02:15:46.219: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_2af04 became leader
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-krs9s to bootstrap-e2e-minion-group-s51h
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 589.41049ms (589.437215ms including waiting)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container konnectivity-agent Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-krs9s_kube-system(b4621953-3c0a-4ce4-9765-0425f1520b19) Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rw7fw to bootstrap-e2e-minion-group-7c3d Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 627.397814ms (627.417417ms including waiting) Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get 
"http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container konnectivity-agent Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rw7fw_kube-system(6c6104fa-8a94-4417-b2d9-dbd47d6240f2) Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-x4gbp to bootstrap-e2e-minion-group-6w15 Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 02:15:46.219: INFO: 
event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.54378487s (2.543795192s including waiting) Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168) Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rw7fw
Jan 29 02:15:46.219: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-x4gbp
Jan 29 02:15:46.219: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-krs9s
Jan 29 02:15:46.219: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 02:15:46.219: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 02:15:46.219: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 02:15:46.219: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 02:15:46.219: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 02:15:46.219: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:15:46.219: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8c88f9f3-0fcf-4820-9f5f-5ee5c968f50d became leader
Jan 29 02:15:46.219: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e5ddf3f0-26c9-4d3b-ba00-8f32b5849ba5 became leader
Jan 29 02:15:46.219: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f4908364-bab0-42a0-b122-c2caa2e85f9f became leader
Jan 29 02:15:46.219: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_712d0dd3-6c1a-4e1f-b3cc-88b0c22b6924 became leader
Jan 29 02:15:46.219: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_9b60ae0c-9c68-4eff-a04f-db8240d12112 became leader
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fths2 to bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 4.452281102s (4.452289884s including waiting)
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe)
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe)
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fths2
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38)
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38)
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-s51h_kube-system(2451b12f9e04e1f8e16fde66c2622fcd)
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy
Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 02:15:46.219: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a9b313b0-f9fa-43de-b979-0958c05e1287 became leader
Jan 29 02:15:46.219: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ecac3899-f709-4f43-824f-37faa839889c became leader
Jan 29 02:15:46.219: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_460317a8-6d35-4656-87b9-0d8d3533477a became leader
Jan 29 02:15:46.219: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_627862b6-098a-451d-a466-095484f8ed41 became leader
Jan 29 02:15:46.219: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_caef08d2-a89e-4d0b-bfa2-93ded991bebe became leader
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-9bf57 to bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 573.484189ms (573.492084ms including waiting)
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-9bf57
Jan 29 02:15:46.219: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bff8h to bootstrap-e2e-minion-group-s51h
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 737.160338ms (737.179651ms including waiting)
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.876782326s (1.876796204s including waiting)
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bv2w9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.977248ms (680.991364ms including waiting)
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.818844362s (1.818852935s including waiting)
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pn2qm to bootstrap-e2e-minion-group-7c3d
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 679.514836ms (679.523319ms including waiting)
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.788401445s (1.788433466s including waiting)
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qnhsn to bootstrap-e2e-master
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 888.975253ms (888.981818ms including waiting)
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.94842067s (1.948435203s including waiting)
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qnhsn
Jan 29 02:15:46.220: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pn2qm
Jan 29 02:15:46.220: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bff8h
Jan 29 02:15:46.220: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bv2w9
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-tj5j9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.279253505s (2.279262122s including waiting)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.794216432s (3.794249509s including waiting)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-tj5j9
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-tj5j9
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-kkpk2 to bootstrap-e2e-minion-group-7c3d
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.249697964s (1.249709924s including waiting)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 964.990126ms (965.003136ms including waiting)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": context deadline exceeded
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container
metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-kkpk2_kube-system(479216da-5769-49ec-9587-0666568c1790)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-kkpk2_kube-system(479216da-5769-49ec-9587-0666568c1790)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-kkpk2
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.228985526s (2.228994351s including waiting)
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f15bbfbe-0efc-4a1b-ab62-e07fa18067f5)
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f15bbfbe-0efc-4a1b-ab62-e07fa18067f5)
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:15:46.22 (50ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:15:46.22
Jan 29 02:15:46.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:15:46.263 (43ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 02:15:46.263
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 02:15:46.263 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:15:46.263
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:15:46.263
STEP: Collecting events from namespace "reboot-9247". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 02:15:46.263
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 02:15:46.305
Jan 29 02:15:46.346: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 02:15:46.346: INFO:
Jan 29 02:15:46.389: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 02:15:46.431: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 09b38bdb-4830-432f-941a-7f47d2e4cb82 2367 0 2023-01-29 01:56:15 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:15 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 01:56:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 02:12:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} 
{<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:20 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:20 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:20 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:12:20 +0000 UTC,LastTransitionTime:2023-01-29 01:56:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.48.38,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817944af0c35e596144cbe0c39ece004,SystemUUID:817944af-0c35-e596-144c-be0c39ece004,BootID:10741312-523c-4032-96d6-5f4f987f3139,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 02:15:46.432: INFO: Logging kubelet events for node bootstrap-e2e-master
Jan 29 02:15:46.477: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 29 02:15:46.520: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available
Jan 29 02:15:46.520: INFO: Logging node info for node bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.561: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6w15 1fb28d13-4bf7-48f6-87ef-e22ff445a0fa 2650 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6w15 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:12:10 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:15:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-6w15,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:10 +0000 
UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:10 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:10 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:12:10 +0000 UTC,LastTransitionTime:2023-01-29 02:07:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.188.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4953e80002e138ed6b9c633aa1bea962,SystemUUID:4953e800-02e1-38ed-6b9c-633aa1bea962,BootID:de7cc9dc-cf41-49bc-9f0a-238c12b78432,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 02:15:46.562: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.606: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.650: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6w15: error trying to reach service: No agent available
Jan 29 02:15:46.650: INFO: Logging node info for node bootstrap-e2e-minion-group-7c3d
Jan 29 02:15:46.692: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7c3d 8e1fb573-c544-42e8-afb6-9489bf273e1f 2342 0 2023-01-29 01:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7c3d kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29
01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:03:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:09:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:12:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"
f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-7c3d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 02:04:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.28.1,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e82fc84d3d165f0af5fb24e7309ec0f6,SystemUUID:e82fc84d-3d16-5f0a-f5fb-24e7309ec0f6,BootID:d8228130-72eb-4a47-9a62-918a765d9db2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:15:46.692: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7c3d Jan 29 02:15:46.737: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7c3d Jan 29 02:15:46.781: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-7c3d: error trying to reach service: No agent available Jan 29 02:15:46.781: INFO: Logging node info for node bootstrap-e2e-minion-group-s51h Jan 29 02:15:46.823: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-s51h 889261a3-c23b-4a70-8491-293cc30164ed 2629 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-s51h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:12:07 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:14:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-s51h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:07 +0000 
UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:12:07 +0000 UTC,LastTransitionTime:2023-01-29 02:07:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.145.127.28,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e84ea8c5f84b48682cb3668f2d7a776c,SystemUUID:e84ea8c5-f84b-4868-2cb3-668f2d7a776c,BootID:d00f00b1-34f8-4b2c-87f8-05ec98efeca6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:15:46.823: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-s51h Jan 29 02:15:46.868: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-s51h Jan 29 02:15:46.912: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-s51h: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:15:46.912 (649ms) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:15:46.912 (649ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:15:46.912 STEP: Destroying namespace "reboot-9247" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 02:15:46.912 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:15:46.955 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:15:46.956 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:15:46.956 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:15:46.169 (from junit_01.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:11:20.654 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:11:20.655 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:11:20.655 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 02:11:20.655 Jan 29 02:11:20.655: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 02:11:20.657 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 02:13:16.609 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 02:13:16.702 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:13:16.783 (1m56.129s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 02:13:16.783 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 02:13:16.784 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 02:13:16.784 Jan 29 02:13:16.879: INFO: Getting bootstrap-e2e-minion-group-6w15 Jan 29 02:13:16.880: INFO: Getting bootstrap-e2e-minion-group-s51h Jan 29 02:13:16.880: INFO: Getting bootstrap-e2e-minion-group-7c3d Jan 29 02:13:16.954: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7c3d condition Ready to be true Jan 29 02:13:16.955: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6w15 condition 
Ready to be true Jan 29 02:13:16.955: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-s51h condition Ready to be true Jan 29 02:13:16.996: INFO: Node bootstrap-e2e-minion-group-7c3d has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm] Jan 29 02:13:16.996: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm] Jan 29 02:13:16.996: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-pn2qm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:16.996: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7c3d" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:16.999: INFO: Node bootstrap-e2e-minion-group-s51h has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-bff8h kube-proxy-bootstrap-e2e-minion-group-s51h] Jan 29 02:13:16.999: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-bff8h kube-proxy-bootstrap-e2e-minion-group-s51h] Jan 29 02:13:16.999: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s51h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.000: INFO: Node bootstrap-e2e-minion-group-6w15 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0] Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0] Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for pod 
"metadata-proxy-v0.1-bff8h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-fths2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6w15" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.000: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bv2w9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:13:17.039: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=false. Elapsed: 42.165862ms Jan 29 02:13:17.039: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-7c3d' on 'bootstrap-e2e-minion-group-7c3d' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC }] Jan 29 02:13:17.039: INFO: Pod "metadata-proxy-v0.1-pn2qm": Phase="Running", Reason="", readiness=true. Elapsed: 42.557094ms Jan 29 02:13:17.039: INFO: Pod "metadata-proxy-v0.1-pn2qm" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.045: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 45.008619ms Jan 29 02:13:17.045: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2": Phase="Running", Reason="", readiness=true. 
Elapsed: 44.828765ms Jan 29 02:13:17.045: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.045: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }] Jan 29 02:13:17.046: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h": Phase="Running", Reason="", readiness=true. Elapsed: 46.550423ms Jan 29 02:13:17.046: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.046: INFO: Pod "metadata-proxy-v0.1-bff8h": Phase="Running", Reason="", readiness=true. Elapsed: 46.037399ms Jan 29 02:13:17.046: INFO: Pod "metadata-proxy-v0.1-bff8h" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.046: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-bff8h kube-proxy-bootstrap-e2e-minion-group-s51h] Jan 29 02:13:17.046: INFO: Getting external IP address for bootstrap-e2e-minion-group-s51h Jan 29 02:13:17.046: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-s51h(34.145.127.28:22) Jan 29 02:13:17.046: INFO: Pod "metadata-proxy-v0.1-bv2w9": Phase="Running", Reason="", readiness=true. 
Elapsed: 46.458945ms Jan 29 02:13:17.046: INFO: Pod "metadata-proxy-v0.1-bv2w9" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.047: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15": Phase="Running", Reason="", readiness=true. Elapsed: 46.95706ms Jan 29 02:13:17.047: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15" satisfied condition "running and ready, or succeeded" Jan 29 02:13:17.569: INFO: ssh prow@34.145.127.28:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 02:13:17.569: INFO: ssh prow@34.145.127.28:22: stdout: "" Jan 29 02:13:17.569: INFO: ssh prow@34.145.127.28:22: stderr: "" Jan 29 02:13:17.569: INFO: ssh prow@34.145.127.28:22: exit code: 0 Jan 29 02:13:17.569: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-s51h condition Ready to be false Jan 29 02:13:17.611: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:13:19.110: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=false. Elapsed: 2.113066398s Jan 29 02:13:19.110: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.110018255s Jan 29 02:13:19.110: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-7c3d' on 'bootstrap-e2e-minion-group-7c3d' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC }] Jan 29 02:13:19.110: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }] Jan 29 02:13:19.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:13:21.081: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.084947441s
Jan 29 02:13:21.082: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-7c3d' on 'bootstrap-e2e-minion-group-7c3d' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC }]
Jan 29 02:13:21.089: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.089204084s
Jan 29 02:13:21.089: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:21.700: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:23.091: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=false. Elapsed: 6.094706566s
Jan 29 02:13:23.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-7c3d' on 'bootstrap-e2e-minion-group-7c3d' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:34 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:03:34 +0000 UTC }]
Jan 29 02:13:23.095: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.09568861s
Jan 29 02:13:23.095: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:23.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:25.097: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d": Phase="Running", Reason="", readiness=true. Elapsed: 8.100423314s
Jan 29 02:13:25.097: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7c3d" satisfied condition "running and ready, or succeeded"
Jan 29 02:13:25.097: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-7c3d metadata-proxy-v0.1-pn2qm]
Jan 29 02:13:25.097: INFO: Getting external IP address for bootstrap-e2e-minion-group-7c3d
Jan 29 02:13:25.097: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-7c3d(35.247.28.1:22)
Jan 29 02:13:25.102: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.10228584s
Jan 29 02:13:25.102: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:25.617: INFO: ssh prow@35.247.28.1:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 29 02:13:25.617: INFO: ssh prow@35.247.28.1:22: stdout: ""
Jan 29 02:13:25.617: INFO: ssh prow@35.247.28.1:22: stderr: ""
Jan 29 02:13:25.617: INFO: ssh prow@35.247.28.1:22: exit code: 0
Jan 29 02:13:25.617: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-7c3d condition Ready to be false
Jan 29 02:13:25.659: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:25.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:27.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.091494679s
Jan 29 02:13:27.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:27.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:27.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:29.089: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 12.089727183s
Jan 29 02:13:29.089: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:29.749: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:29.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:31.093: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.093132842s
Jan 29 02:13:31.093: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:31.792: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:31.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:33.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.09167165s
Jan 29 02:13:33.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:33.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:33.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:35.101: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.101507263s
Jan 29 02:13:35.101: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:35.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:36.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:37.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 20.090967682s
Jan 29 02:13:37.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:37.919: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:38.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:39.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.091499469s
Jan 29 02:13:39.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:39.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:40.096: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:41.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.091777615s
Jan 29 02:13:41.092: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:42.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:42.139: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:43.091: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.091088276s
Jan 29 02:13:43.091: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-6w15' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 02:12:23 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 01:56:32 +0000 UTC }]
Jan 29 02:13:44.052: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:44.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:45.096: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 28.09641153s
Jan 29 02:13:45.096: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 02:13:45.096: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true.
Pods: [kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9 volume-snapshot-controller-0]
Jan 29 02:13:45.096: INFO: Getting external IP address for bootstrap-e2e-minion-group-6w15
Jan 29 02:13:45.096: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6w15(35.233.188.19:22)
Jan 29 02:13:45.621: INFO: ssh prow@35.233.188.19:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 29 02:13:45.621: INFO: ssh prow@35.233.188.19:22: stdout: ""
Jan 29 02:13:45.621: INFO: ssh prow@35.233.188.19:22: stderr: ""
Jan 29 02:13:45.621: INFO: ssh prow@35.233.188.19:22: exit code: 0
Jan 29 02:13:45.621: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6w15 condition Ready to be false
Jan 29 02:13:45.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:46.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:46.225: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:47.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:48.138: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:48.267: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:49.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:50.182: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:50.311: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:51.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:52.225: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:52.354: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:53.841: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:54.268: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:54.397: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:55.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:56.310: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:56.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:57.928: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:58.353: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:58.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:13:59.973: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:00.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:00.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:02.015: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:02.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:02.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:04.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:04.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:04.612: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:06.100: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:06.527: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:06.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:08.143: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:08.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:08.699: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled
Jan 29 02:14:10.187: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:10.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:10.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:12.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:12.656: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:12.784: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:14.274: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:14.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:14.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:16.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:16.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:16.872: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:18.359: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:18.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:18.915: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:20.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:20.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:20.958: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:22.445: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:22.872: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:23.001: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:24.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:24.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:25.046: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:26.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:26.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:27.090: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:28.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:29.005: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:29.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:30.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:31.054: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:31.177: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:32.660: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:33.097: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:33.220: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:34.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:35.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:35.263: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:36.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled
Jan 29 02:14:37.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:37.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:38.791: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:39.227: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:39.349: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:40.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:41.270: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:41.392: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:42.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:43.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:43.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:44.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:45.356: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:45.481: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:46.965: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:47.399: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:47.524: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:49.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:49.442: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:49.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:51.053: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:51.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:51.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:53.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:53.546: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:53.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:55.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:55.589: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:55.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:57.180: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:57.632: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:57.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:59.223: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:59.676: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:14:59.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:15:01.266: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:15:01.719: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:15:01.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:15:03.309: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:15:03.762: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled Jan 29 02:15:03.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:05.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:05.804: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:05.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:07.395: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:07.847: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:07.963: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:09.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:09.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:10.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 02:15:11.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:11.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:12.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:13.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:13.978: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:14.092: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:15.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:16.020: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:16.136: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:17.608: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 02:15:18.063: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:18.136: INFO: Node bootstrap-e2e-minion-group-s51h didn't reach desired Ready condition status (false) within 2m0s Jan 29 02:15:19.654: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:20.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:21.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:22.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:23.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:24.193: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:25.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:26.193: INFO: Node bootstrap-e2e-minion-group-7c3d didn't reach desired Ready condition status (false) within 2m0s Jan 29 02:15:27.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 02:15:29.869: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:31.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:33.954: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:35.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:38.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:40.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:42.125: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:44.169: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:15:46.169: INFO: Node bootstrap-e2e-minion-group-6w15 didn't reach desired Ready condition status (false) within 2m0s Jan 29 02:15:46.169: INFO: Node bootstrap-e2e-minion-group-6w15 failed reboot test. Jan 29 02:15:46.169: INFO: Node bootstrap-e2e-minion-group-7c3d failed reboot test. Jan 29 02:15:46.169: INFO: Node bootstrap-e2e-minion-group-s51h failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. 
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:15:46.169
< Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 02:15:46.169 (2m29.386s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:15:46.169
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 02:15:46.169
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-2nvv4 to bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.229909205s (4.229917066s including waiting)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-2nvv4_kube-system(c5a7c76e-33f7-4271-a7f7-8f4b6013857d)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.18:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.24:8080/health": dial tcp 10.64.3.24:8080: connect: connection refused
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-2nvv4_kube-system(c5a7c76e-33f7-4271-a7f7-8f4b6013857d)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.24:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.33:8181/ready": dial tcp 10.64.3.33:8181: connect: connection refused
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-sch2n to bootstrap-e2e-minion-group-s51h
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 968.405842ms (968.417139ms including waiting)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-sch2n_kube-system(0ca61b79-17d9-42ef-bece-365ae3a67989)
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-2nvv4
Jan 29 02:15:46.219: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-sch2n
Jan 29 02:15:46.219: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 02:15:46.219: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 02:15:46.219: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 02:15:46.219: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 02:15:46.219: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 02:15:46.219: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 02:15:46.219: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d580 became leader
Jan 29 02:15:46.219: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b84f3 became leader
Jan 29 02:15:46.219: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1f6a8 became leader
Jan 29 02:15:46.219: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_d2447 became leader
Jan 29 02:15:46.219: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_2af04 became leader
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-krs9s to bootstrap-e2e-minion-group-s51h
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 589.41049ms (589.437215ms including waiting)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-krs9s_kube-system(b4621953-3c0a-4ce4-9765-0425f1520b19)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rw7fw to bootstrap-e2e-minion-group-7c3d
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 627.397814ms (627.417417ms including waiting)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rw7fw_kube-system(6c6104fa-8a94-4417-b2d9-dbd47d6240f2)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-x4gbp to bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.54378487s (2.543795192s including waiting)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent
Jan 29 02:15:46.219: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168)
Jan 29 02:15:46.219: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rw7fw
Jan 29 02:15:46.219: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-x4gbp
Jan 29 02:15:46.219: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-krs9s
Jan 29 02:15:46.219: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 02:15:46.219: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 02:15:46.219: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 02:15:46.219: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 02:15:46.219: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 02:15:46.219: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:15:46.219: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 02:15:46.219: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.219: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8c88f9f3-0fcf-4820-9f5f-5ee5c968f50d became leader Jan 29 02:15:46.219: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e5ddf3f0-26c9-4d3b-ba00-8f32b5849ba5 became leader Jan 29 02:15:46.219: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f4908364-bab0-42a0-b122-c2caa2e85f9f became leader Jan 29 02:15:46.219: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_712d0dd3-6c1a-4e1f-b3cc-88b0c22b6924 became leader Jan 29 02:15:46.219: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_9b60ae0c-9c68-4eff-a04f-db8240d12112 became leader Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fths2 to bootstrap-e2e-minion-group-6w15 Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 4.452281102s (4.452289884s including waiting) Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe) Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe) Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 02:15:46.219: INFO: event for 
kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fths2 Jan 29 02:15:46.219: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38) Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38) Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127) Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127) Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 
02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-s51h_kube-system(2451b12f9e04e1f8e16fde66c2622fcd) Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 02:15:46.219: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:15:46.219: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 02:15:46.219: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a9b313b0-f9fa-43de-b979-0958c05e1287 became leader Jan 29 02:15:46.219: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ecac3899-f709-4f43-824f-37faa839889c became leader Jan 29 02:15:46.219: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_460317a8-6d35-4656-87b9-0d8d3533477a became leader Jan 29 02:15:46.219: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_627862b6-098a-451d-a466-095484f8ed41 became leader Jan 29 02:15:46.219: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_caef08d2-a89e-4d0b-bfa2-93ded991bebe became leader Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-9bf57 to bootstrap-e2e-minion-group-6w15 Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 573.484189ms (573.492084ms including waiting) Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend Jan 29 02:15:46.219: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-9bf57 Jan 29 02:15:46.219: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 02:15:46.219: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and 
re-created. Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bff8h to bootstrap-e2e-minion-group-s51h Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 737.160338ms (737.179651ms including waiting) Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.876782326s (1.876796204s including waiting) Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bv2w9 to bootstrap-e2e-minion-group-6w15 Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.977248ms (680.991364ms including waiting) Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container 
metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.818844362s (1.818852935s including waiting) Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pn2qm to bootstrap-e2e-minion-group-7c3d Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 679.514836ms (679.523319ms including waiting) Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet 
bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.788401445s (1.788433466s including waiting) Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} DNSConfigForming: Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qnhsn to bootstrap-e2e-master Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 888.975253ms (888.981818ms including waiting) Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.94842067s (1.948435203s including waiting) Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 02:15:46.219: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qnhsn Jan 29 02:15:46.220: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pn2qm Jan 29 02:15:46.220: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: 
metadata-proxy-v0.1-bff8h
Jan 29 02:15:46.220: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bv2w9
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-tj5j9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.279253505s (2.279262122s including waiting)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.794216432s (3.794249509s including waiting)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-tj5j9
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-tj5j9
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-kkpk2 to bootstrap-e2e-minion-group-7c3d
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.249697964s (1.249709924s including waiting)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 964.990126ms (965.003136ms including waiting)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": context deadline exceeded
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server-nanny
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-kkpk2_kube-system(479216da-5769-49ec-9587-0666568c1790)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-kkpk2_kube-system(479216da-5769-49ec-9587-0666568c1790)
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-kkpk2
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 02:15:46.220: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.228985526s (2.228994351s including waiting)
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f15bbfbe-0efc-4a1b-ab62-e07fa18067f5)
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container volume-snapshot-controller
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f15bbfbe-0efc-4a1b-ab62-e07fa18067f5)
Jan 29 02:15:46.220: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:15:46.22 (50ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:15:46.22
Jan 29 02:15:46.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:15:46.263 (43ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 02:15:46.263
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 02:15:46.263 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:15:46.263
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:15:46.263
STEP: Collecting events from namespace "reboot-9247". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 02:15:46.263
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 02:15:46.305
Jan 29 02:15:46.346: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 02:15:46.346: INFO:
Jan 29 02:15:46.389: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 02:15:46.431: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 09b38bdb-4830-432f-941a-7f47d2e4cb82 2367 0 2023-01-29 01:56:15 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:15 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 01:56:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 02:12:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} 
{<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:20 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:20 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:20 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:12:20 +0000 UTC,LastTransitionTime:2023-01-29 01:56:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.48.38,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817944af0c35e596144cbe0c39ece004,SystemUUID:817944af-0c35-e596-144c-be0c39ece004,BootID:10741312-523c-4032-96d6-5f4f987f3139,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 02:15:46.432: INFO: Logging kubelet events for node bootstrap-e2e-master
Jan 29 02:15:46.477: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 29 02:15:46.520: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available
Jan 29 02:15:46.520: INFO: Logging node info for node bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.561: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6w15 1fb28d13-4bf7-48f6-87ef-e22ff445a0fa 2650 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6w15 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:12:10 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:15:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-6w15,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:15:08 +0000 UTC,LastTransitionTime:2023-01-29 02:15:07 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:10 +0000 
UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:10 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:10 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:12:10 +0000 UTC,LastTransitionTime:2023-01-29 02:07:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.188.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4953e80002e138ed6b9c633aa1bea962,SystemUUID:4953e800-02e1-38ed-6b9c-633aa1bea962,BootID:de7cc9dc-cf41-49bc-9f0a-238c12b78432,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 02:15:46.562: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.606: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6w15
Jan 29 02:15:46.650: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6w15: error trying to reach service: No agent available
Jan 29 02:15:46.650: INFO: Logging node info for node bootstrap-e2e-minion-group-7c3d
Jan 29 02:15:46.692: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7c3d 8e1fb573-c544-42e8-afb6-9489bf273e1f 2342 0 2023-01-29 01:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7c3d kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29
01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:03:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:09:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:12:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"
f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-7c3d,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no 
deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:12:03 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:09:38 +0000 UTC,LastTransitionTime:2023-01-29 02:04:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.28.1,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e82fc84d3d165f0af5fb24e7309ec0f6,SystemUUID:e82fc84d-3d16-5f0a-f5fb-24e7309ec0f6,BootID:d8228130-72eb-4a47-9a62-918a765d9db2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:15:46.692: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7c3d Jan 29 02:15:46.737: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7c3d Jan 29 02:15:46.781: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-7c3d: error trying to reach service: No agent available Jan 29 02:15:46.781: INFO: Logging node info for node bootstrap-e2e-minion-group-s51h Jan 29 02:15:46.823: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-s51h 889261a3-c23b-4a70-8491-293cc30164ed 2629 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-s51h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:07:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:12:07 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:14:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-s51h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:14:45 +0000 UTC,LastTransitionTime:2023-01-29 02:14:44 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:07 +0000 
UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:12:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:12:07 +0000 UTC,LastTransitionTime:2023-01-29 02:07:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.145.127.28,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e84ea8c5f84b48682cb3668f2d7a776c,SystemUUID:e84ea8c5-f84b-4868-2cb3-668f2d7a776c,BootID:d00f00b1-34f8-4b2c-87f8-05ec98efeca6,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:15:46.823: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-s51h Jan 29 02:15:46.868: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-s51h Jan 29 02:15:46.912: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-s51h: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:15:46.912 (649ms) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:15:46.912 (649ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:15:46.912 STEP: Destroying namespace "reboot-9247" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 02:15:46.912 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:15:46.955 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:15:46.956 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:15:46.956 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:05:35.536 (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:02:34.801 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:02:34.801 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:02:34.801 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 02:02:34.801 Jan 29 02:02:34.801: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 02:02:34.802 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 02:03:33.343 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 02:03:33.424 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:03:33.526 (58.725s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 02:03:33.526 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 02:03:33.526 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 02:03:33.526 Jan 29 02:03:33.591: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is false instead of true. 
Reason: KubeletNotReady, message: [PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized] Jan 29 02:03:33.642: INFO: Getting bootstrap-e2e-minion-group-6w15 Jan 29 02:03:33.642: INFO: Getting bootstrap-e2e-minion-group-s51h Jan 29 02:03:33.684: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6w15 condition Ready to be true Jan 29 02:03:33.714: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-s51h condition Ready to be true Jan 29 02:03:33.727: INFO: Node bootstrap-e2e-minion-group-6w15 has 4 assigned pods with no liveness probes: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9] Jan 29 02:03:33.727: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9] Jan 29 02:03:33.727: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bv2w9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.728: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.728: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6w15" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.728: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-fths2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.757: INFO: Node bootstrap-e2e-minion-group-s51h has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h] Jan 29 02:03:33.757: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h] Jan 29 02:03:33.757: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bff8h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.757: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s51h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.770: INFO: Pod "metadata-proxy-v0.1-bv2w9": Phase="Running", Reason="", readiness=true. Elapsed: 42.855313ms Jan 29 02:03:33.770: INFO: Pod "metadata-proxy-v0.1-bv2w9" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.771: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 43.591887ms Jan 29 02:03:33.771: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.771: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2": Phase="Running", Reason="", readiness=true. Elapsed: 43.514235ms Jan 29 02:03:33.771: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.772: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15": Phase="Running", Reason="", readiness=true. Elapsed: 44.667481ms Jan 29 02:03:33.772: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.772: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9] Jan 29 02:03:33.772: INFO: Getting external IP address for bootstrap-e2e-minion-group-6w15 Jan 29 02:03:33.772: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6w15(35.233.188.19:22) Jan 29 02:03:33.800: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h": Phase="Running", Reason="", readiness=true. Elapsed: 42.812329ms Jan 29 02:03:33.800: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.800: INFO: Pod "metadata-proxy-v0.1-bff8h": Phase="Running", Reason="", readiness=true. Elapsed: 43.02646ms Jan 29 02:03:33.800: INFO: Pod "metadata-proxy-v0.1-bff8h" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.800: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h] Jan 29 02:03:33.800: INFO: Getting external IP address for bootstrap-e2e-minion-group-s51h Jan 29 02:03:33.800: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-s51h(34.145.127.28:22) Jan 29 02:03:34.305: INFO: ssh prow@35.233.188.19:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 02:03:34.305: INFO: ssh prow@35.233.188.19:22: stdout: "" Jan 29 02:03:34.305: INFO: ssh prow@35.233.188.19:22: stderr: "" Jan 29 02:03:34.305: INFO: ssh prow@35.233.188.19:22: exit code: 0 Jan 29 02:03:34.305: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6w15 condition Ready to be false Jan 29 02:03:34.344: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:34.354: INFO: ssh prow@34.145.127.28:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 02:03:34.354: INFO: ssh prow@34.145.127.28:22: stdout: "" Jan 29 02:03:34.354: INFO: ssh prow@34.145.127.28:22: stderr: "" Jan 29 02:03:34.354: INFO: ssh prow@34.145.127.28:22: exit code: 0 Jan 29 02:03:34.354: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-s51h condition Ready to be false Jan 29 02:03:34.393: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:36.384: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:36.434: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:38.425: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:38.475: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:40.464: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:40.515: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h 
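For reference, the unclean-reboot trigger the suite sends over SSH (visible verbatim in the log lines above) can be sketched as a standalone shell snippet. The command string is taken from the log; the node IP here is a documentation placeholder, not one of the test nodes, and the snippet only constructs and prints the string rather than executing it:

```shell
# Build the unclean-reboot command the e2e suite runs over SSH.
# 'echo 1 | sudo tee /proc/sys/kernel/sysrq' enables all sysrq functions,
# and 'echo b | sudo tee /proc/sysrq-trigger' reboots the machine
# immediately WITHOUT syncing or unmounting filesystems -- an unclean
# reboot. The nohup + 10s sleep let the SSH session return an exit code
# before the node actually goes down.
NODE_IP="203.0.113.10"  # placeholder address, not a real test node

build_reboot_cmd() {
  printf "%s" "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &"
}

echo "SSH \"$(build_reboot_cmd)\" on ${NODE_IP}:22"
```

Running the built string on a live host will hard-reboot it; here it is only printed so the shape of the log line above ("SSH ... on <ip>:22") is reproducible.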
Jan 29 02:03:42.505: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:42.556: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:44.546: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:44.596: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:46.587: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:46.636: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:48.628: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:48.677: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:50.669: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:50.717: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:52.710: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:52.758: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:54.750: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:54.798: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:56.791: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:56.838: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:58.831: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:58.878: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:00.871: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:00.919: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:02.911: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:02.959: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:04.951: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:04.998: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:06.992: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:07.039: INFO: Couldn't get node 
bootstrap-e2e-minion-group-s51h
Jan 29 02:04:09.032: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:09.080: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:11.071: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:11.120: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:13.112: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:13.161: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:15.152: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:15.201: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:17.191: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:17.241: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:19.232: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:19.281: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:21.274: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:21.321: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:23.313: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:23.362: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:25.355: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:25.403: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:27.395: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:27.443: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:29.436: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:29.483: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:31.476: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15
Jan 29 02:04:31.523: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h
Jan 29 02:04:38.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:38.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:40.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:40.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:42.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:42.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:44.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:44.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:46.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:46.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:48.417: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:48.417: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:50.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:50.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:52.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:52.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:54.575: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:54.575: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:56.621: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:56.621: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:58.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:04:58.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:00.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:00.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:02.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:02.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:04.809: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:04.809: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:06.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:06.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:08.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:08.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:10.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:10.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:12.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:12.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:15.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:15.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:17.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:17.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:19.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:19.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:21.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:21.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:23.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:23.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:25.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:25.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:27.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:27.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:29.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:29.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:31.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:31.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:33.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:33.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 02:05:35.535: INFO: Node bootstrap-e2e-minion-group-6w15 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 02:05:35.535: INFO: Node bootstrap-e2e-minion-group-s51h didn't reach desired Ready condition status (false) within 2m0s
Jan 29 02:05:35.535: INFO: Node bootstrap-e2e-minion-group-6w15 failed reboot test.
Jan 29 02:05:35.535: INFO: Node bootstrap-e2e-minion-group-s51h failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:05:35.536
< Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 02:05:35.536 (2m2.01s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:05:35.536
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 02:05:35.536
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-2nvv4 to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.229909205s (4.229917066s including waiting)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-2nvv4_kube-system(c5a7c76e-33f7-4271-a7f7-8f4b6013857d)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.18:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-sch2n to bootstrap-e2e-minion-group-s51h
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 968.405842ms (968.417139ms including waiting)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-2nvv4
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-sch2n
Jan 29 02:05:35.587: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 02:05:35.587: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 02:05:35.587: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 02:05:35.587: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 02:05:35.587: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 02:05:35.587: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 02:05:35.587: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d580 became leader
Jan 29 02:05:35.587: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b84f3 became leader
Jan 29 02:05:35.587: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1f6a8 became leader
Jan 29 02:05:35.587: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_d2447 became leader
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-krs9s to bootstrap-e2e-minion-group-s51h
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 589.41049ms (589.437215ms including waiting)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rw7fw to bootstrap-e2e-minion-group-7c3d
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 627.397814ms (627.417417ms including waiting)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-x4gbp to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.54378487s (2.543795192s including waiting)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rw7fw
Jan 29 02:05:35.587: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-x4gbp
Jan 29 02:05:35.587: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-krs9s
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 02:05:35.587: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 02:05:35.587: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:05:35.587: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8c88f9f3-0fcf-4820-9f5f-5ee5c968f50d became leader
Jan 29 02:05:35.587: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e5ddf3f0-26c9-4d3b-ba00-8f32b5849ba5 became leader
Jan 29 02:05:35.587: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f4908364-bab0-42a0-b122-c2caa2e85f9f became leader
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fths2 to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 4.452281102s (4.452289884s including waiting)
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe)
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fths2
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38)
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-s51h_kube-system(2451b12f9e04e1f8e16fde66c2622fcd)
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 02:05:35.587: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a9b313b0-f9fa-43de-b979-0958c05e1287 became leader
Jan 29 02:05:35.587: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ecac3899-f709-4f43-824f-37faa839889c became leader
Jan 29 02:05:35.587: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_460317a8-6d35-4656-87b9-0d8d3533477a became leader
Jan 29 02:05:35.587: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_627862b6-098a-451d-a466-095484f8ed41 became leader
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-9bf57 to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 573.484189ms (573.492084ms including waiting)
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-9bf57
Jan 29 02:05:35.587: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bff8h to bootstrap-e2e-minion-group-s51h
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 737.160338ms (737.179651ms including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.876782326s (1.876796204s including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bv2w9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.977248ms (680.991364ms including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.818844362s (1.818852935s including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pn2qm to bootstrap-e2e-minion-group-7c3d
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 679.514836ms (679.523319ms including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.788401445s (1.788433466s including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qnhsn to bootstrap-e2e-master
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 888.975253ms (888.981818ms including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.94842067s (1.948435203s including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-qnhsn
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-pn2qm
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bff8h
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bv2w9
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-tj5j9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.279253505s (2.279262122s including waiting)
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.794216432s (3.794249509s including waiting)
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metrics-server-nanny
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metrics-server-nanny
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container metrics-server-nanny
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c-tj5j9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-tj5j9
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-tj5j9
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: { } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-kkpk2 to bootstrap-e2e-minion-group-7c3d
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.249697964s (1.249709924s including waiting)
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 964.990126ms (965.003136ms including waiting)
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": dial tcp 10.64.1.3:10250: connect: connection refused
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "https://10.64.1.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.3:10250/readyz": context deadline exceeded
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container metrics-server-nanny
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metrics-server-nanny
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metrics-server-nanny
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9-kkpk2: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Readiness probe failed: Get "https://10.64.1.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-kkpk2
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 02:05:35.587: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.228985526s (2.228994351s including waiting)
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container volume-snapshot-controller
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container volume-snapshot-controller
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container volume-snapshot-controller
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(f15bbfbe-0efc-4a1b-ab62-e07fa18067f5)
Jan 29 02:05:35.587: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:05:35.587 (51ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:05:35.587
Jan 29 02:05:35.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 02:05:35.632 (45ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 02:05:35.632
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 02:05:35.632 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:05:35.632
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:05:35.632
STEP: Collecting events from namespace "reboot-1141". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 02:05:35.632
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 02:05:35.673
Jan 29 02:05:35.714: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 02:05:35.714: INFO:
Jan 29 02:05:35.760: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 02:05:35.802: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 09b38bdb-4830-432f-941a-7f47d2e4cb82 1343 0 2023-01-29 01:56:15 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 01:56:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 02:02:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a 
route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:02:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:02:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:02:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:15 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:02:07 +0000 UTC,LastTransitionTime:2023-01-29 01:56:36 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.168.48.38,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:817944af0c35e596144cbe0c39ece004,SystemUUID:817944af-0c35-e596-144c-be0c39ece004,BootID:10741312-523c-4032-96d6-5f4f987f3139,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from 
Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:05:35.802: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 02:05:35.847: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 02:05:35.969: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:05:35.969: INFO: Container kube-apiserver ready: true, restart count 2 Jan 29 02:05:35.969: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:05:35.969: INFO: Container kube-scheduler ready: true, restart count 3 Jan 29 02:05:35.969: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 01:55:48 +0000 UTC (0+1 container statuses recorded) Jan 29 02:05:35.969: INFO: Container kube-addon-manager ready: true, restart count 2 Jan 29 02:05:35.969: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 01:55:30 +0000 UTC (0+1 container statuses recorded) Jan 29 02:05:35.969: INFO: Container etcd-container ready: true, restart count 1 Jan 29 02:05:35.969: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:05:35.969: INFO: Container etcd-container ready: true, restart count 0 Jan 29 02:05:35.969: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:05:35.969: INFO: Container konnectivity-server-container ready: true, restart count 0 Jan 29 02:05:35.969: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 01:55:31 +0000 UTC (0+1 container statuses recorded) Jan 29 02:05:35.969: INFO: Container kube-controller-manager ready: false, restart count 5 Jan 29 02:05:35.969: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 01:55:48 +0000 UTC (0+1 container statuses 
recorded) Jan 29 02:05:35.969: INFO: Container l7-lb-controller ready: true, restart count 5 Jan 29 02:05:35.969: INFO: metadata-proxy-v0.1-qnhsn started at 2023-01-29 01:56:48 +0000 UTC (0+2 container statuses recorded) Jan 29 02:05:35.969: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 02:05:35.969: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 02:05:36.185: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 02:05:36.185: INFO: Logging node info for node bootstrap-e2e-minion-group-6w15 Jan 29 02:05:36.231: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-6w15 1fb28d13-4bf7-48f6-87ef-e22ff445a0fa 1516 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-6w15 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 
2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:03:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:03:33 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:04:54 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-6w15,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:04:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:04:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:04:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:04:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:04:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:04:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:04:54 +0000 UTC,LastTransitionTime:2023-01-29 02:04:53 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:03:33 +0000 
UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:03:33 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:03:33 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:03:33 +0000 UTC,LastTransitionTime:2023-01-29 02:03:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.233.188.19,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-6w15.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4953e80002e138ed6b9c633aa1bea962,SystemUUID:4953e800-02e1-38ed-6b9c-633aa1bea962,BootID:fea77b5d-8538-44ed-871c-4e8dede117f9,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:05:36.231: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-6w15 Jan 29 02:05:36.296: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-6w15 Jan 29 02:05:36.358: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-6w15: error trying to reach service: dial tcp 10.138.0.5:10250: connect: connection refused Jan 29 02:05:36.358: INFO: Logging node info for node bootstrap-e2e-minion-group-7c3d Jan 29 02:05:36.400: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7c3d 8e1fb573-c544-42e8-afb6-9489bf273e1f 1498 0 2023-01-29 01:56:18 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7c3d kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } 
{kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 02:02:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:03:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 02:04:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-7c3d,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2023-01-29 02:03:33 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:02:02 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:02:02 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning 
properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:02:02 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:02:02 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:02:02 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:02:02 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:02:02 +0000 UTC,LastTransitionTime:2023-01-29 02:02:01 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:04:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:04:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:04:38 +0000 UTC,LastTransitionTime:2023-01-29 01:56:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:04:38 +0000 UTC,LastTransitionTime:2023-01-29 02:04:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.28.1,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7c3d.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e82fc84d3d165f0af5fb24e7309ec0f6,SystemUUID:e82fc84d-3d16-5f0a-f5fb-24e7309ec0f6,BootID:d8228130-72eb-4a47-9a62-918a765d9db2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef 
registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:05:36.400: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7c3d Jan 29 02:05:36.444: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7c3d Jan 29 02:05:36.504: INFO: kube-proxy-bootstrap-e2e-minion-group-7c3d started at 2023-01-29 02:03:34 +0000 UTC (0+1 container statuses recorded) Jan 29 02:05:36.504: INFO: Container kube-proxy ready: true, restart count 5 Jan 29 02:05:36.504: INFO: metadata-proxy-v0.1-pn2qm started at 2023-01-29 01:56:19 +0000 UTC (0+2 container statuses recorded) Jan 29 02:05:36.504: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 02:05:36.504: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 02:05:36.504: INFO: konnectivity-agent-rw7fw started at 2023-01-29 01:56:32 +0000 UTC (0+1 container statuses recorded) Jan 29 02:05:36.504: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 29 02:05:36.504: INFO: metrics-server-v0.5.2-867b8754b9-kkpk2 started at 2023-01-29 01:56:57 +0000 UTC (0+2 container statuses recorded) Jan 29 02:05:36.504: INFO: Container metrics-server ready: true, restart count 4 Jan 29 02:05:36.504: INFO: Container metrics-server-nanny ready: true, restart count 3 Jan 29 02:05:36.677: INFO: Latency metrics for node bootstrap-e2e-minion-group-7c3d Jan 29 02:05:36.677: INFO: Logging node info for node bootstrap-e2e-minion-group-s51h Jan 29 02:05:36.719: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-s51h 
889261a3-c23b-4a70-8491-293cc30164ed 1520 0 2023-01-29 01:56:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-s51h kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 01:56:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 01:56:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 02:03:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 02:03:33 +0000 UTC FieldsV1 
{"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 02:04:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-reboot-1-4/us-west1-b/bootstrap-e2e-minion-group-s51h,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 
DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 02:04:56 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 02:04:56 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 02:04:56 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 02:04:56 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 02:04:56 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 02:04:56 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning 
properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 02:04:56 +0000 UTC,LastTransitionTime:2023-01-29 02:04:55 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 01:56:32 +0000 UTC,LastTransitionTime:2023-01-29 01:56:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 02:03:33 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 02:03:33 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 02:03:33 +0000 UTC,LastTransitionTime:2023-01-29 01:56:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 02:03:33 +0000 UTC,LastTransitionTime:2023-01-29 02:03:33 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.145.127.28,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-s51h.c.k8s-jkns-gci-gce-reboot-1-4.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e84ea8c5f84b48682cb3668f2d7a776c,SystemUUID:e84ea8c5-f84b-4868-2cb3-668f2d7a776c,BootID:85788ea1-728c-416b-9c5b-f338dea7f258,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 02:05:36.719: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-s51h Jan 29 02:05:36.764: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-s51h Jan 29 02:05:36.810: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-s51h: error trying to reach service: dial tcp 10.138.0.4:10250: connect: connection refused END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 02:05:36.81 (1.177s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 02:05:36.81 (1.177s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:05:36.81 STEP: Destroying namespace "reboot-1141" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 02:05:36.81 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 02:05:36.854 (44ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:05:36.854 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 02:05:36.854 (0s)
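Both reboot tests above rely on the same wait primitive: poll a node condition at a fixed interval until it flips, or give up when the budget (20s for Ready=true, 2m0s for Ready=false) is exhausted. The following is a minimal self-contained sketch of that poll-until-deadline shape; `check_not_ready`, `max_polls`, and the third-poll flip are stand-ins for illustration, not the real e2e framework API.

```shell
# Sketch of a poll-until-deadline loop (illustrative stand-ins, not the
# real Kubernetes e2e framework code).
attempts=0
max_polls=60          # stand-in for the 2m0s budget at a ~2s poll interval
check_not_ready() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # pretend the node goes NotReady on the 3rd poll
}
result="timed out waiting for NotReady"
i=0
while [ "$i" -lt "$max_polls" ]; do
  if check_not_ready; then
    result="node reported NotReady"
    break
  fi
  i=$((i + 1))
done
echo "$result"
```

In the failing run below, the condition never flips: both nodes keep reporting Ready=true for the full two minutes, so the loop exits on the deadline and the test records "didn't reach desired Ready condition status (false) within 2m0s".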
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:05:35.536
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:02:34.801 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 02:02:34.801 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:02:34.801 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 02:02:34.801 Jan 29 02:02:34.801: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 02:02:34.802 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 02:03:33.343 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 02:03:33.424 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 02:03:33.526 (58.725s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 02:03:33.526 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 02:03:33.526 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 02:03:33.526 Jan 29 02:03:33.591: INFO: Condition Ready of node bootstrap-e2e-minion-group-7c3d is false instead of true. 
Reason: KubeletNotReady, message: [PLEG is not healthy: pleg has yet to be successful, container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized] Jan 29 02:03:33.642: INFO: Getting bootstrap-e2e-minion-group-6w15 Jan 29 02:03:33.642: INFO: Getting bootstrap-e2e-minion-group-s51h Jan 29 02:03:33.684: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-6w15 condition Ready to be true Jan 29 02:03:33.714: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-s51h condition Ready to be true Jan 29 02:03:33.727: INFO: Node bootstrap-e2e-minion-group-6w15 has 4 assigned pods with no liveness probes: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9] Jan 29 02:03:33.727: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9] Jan 29 02:03:33.727: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bv2w9" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.728: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.728: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-6w15" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.728: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-fths2" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.757: INFO: Node bootstrap-e2e-minion-group-s51h has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h] Jan 29 02:03:33.757: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h] Jan 29 02:03:33.757: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bff8h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.757: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-s51h" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 02:03:33.770: INFO: Pod "metadata-proxy-v0.1-bv2w9": Phase="Running", Reason="", readiness=true. Elapsed: 42.855313ms Jan 29 02:03:33.770: INFO: Pod "metadata-proxy-v0.1-bv2w9" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.771: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 43.591887ms Jan 29 02:03:33.771: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.771: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2": Phase="Running", Reason="", readiness=true. Elapsed: 43.514235ms Jan 29 02:03:33.771: INFO: Pod "kube-dns-autoscaler-5f6455f985-fths2" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.772: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15": Phase="Running", Reason="", readiness=true. Elapsed: 44.667481ms Jan 29 02:03:33.772: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-6w15" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.772: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-fths2 kube-proxy-bootstrap-e2e-minion-group-6w15 metadata-proxy-v0.1-bv2w9] Jan 29 02:03:33.772: INFO: Getting external IP address for bootstrap-e2e-minion-group-6w15 Jan 29 02:03:33.772: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-6w15(35.233.188.19:22) Jan 29 02:03:33.800: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h": Phase="Running", Reason="", readiness=true. Elapsed: 42.812329ms Jan 29 02:03:33.800: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-s51h" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.800: INFO: Pod "metadata-proxy-v0.1-bff8h": Phase="Running", Reason="", readiness=true. Elapsed: 43.02646ms Jan 29 02:03:33.800: INFO: Pod "metadata-proxy-v0.1-bff8h" satisfied condition "running and ready, or succeeded" Jan 29 02:03:33.800: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-s51h metadata-proxy-v0.1-bff8h] Jan 29 02:03:33.800: INFO: Getting external IP address for bootstrap-e2e-minion-group-s51h Jan 29 02:03:33.800: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-s51h(34.145.127.28:22) Jan 29 02:03:34.305: INFO: ssh prow@35.233.188.19:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 02:03:34.305: INFO: ssh prow@35.233.188.19:22: stdout: "" Jan 29 02:03:34.305: INFO: ssh prow@35.233.188.19:22: stderr: "" Jan 29 02:03:34.305: INFO: ssh prow@35.233.188.19:22: exit code: 0 Jan 29 02:03:34.305: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-6w15 condition Ready to be false Jan 29 02:03:34.344: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:34.354: INFO: ssh prow@34.145.127.28:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 02:03:34.354: INFO: ssh prow@34.145.127.28:22: stdout: "" Jan 29 02:03:34.354: INFO: ssh prow@34.145.127.28:22: stderr: "" Jan 29 02:03:34.354: INFO: ssh prow@34.145.127.28:22: exit code: 0 Jan 29 02:03:34.354: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-s51h condition Ready to be false Jan 29 02:03:34.393: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:36.384: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:36.434: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:38.425: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:38.475: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:40.464: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:40.515: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h 
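The SSH command logged above is how the test forces an unclean reboot: `echo 1 > /proc/sys/kernel/sysrq` enables the kernel's magic SysRq interface, and after a 10-second delay `echo b > /proc/sysrq-trigger` reboots the machine immediately, without syncing or unmounting filesystems. The sketch below only stores and prints the command string copied from the log; actually executing its contents (as root) would hard-reboot the host.

```shell
# The reboot command the test sends over SSH, copied from the log above.
# It is only printed here; running it would trigger an immediate unclean
# reboot ("echo b" reboots without syncing or unmounting disks).
cmd="nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &"
echo "$cmd"
```

The `nohup ... &` wrapper lets the SSH session return immediately (exit code 0, empty stdout/stderr, as the log shows) while the 10-second `sleep` gives the connection time to close before the node goes down.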
Jan 29 02:03:42.505: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:42.556: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:44.546: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:44.596: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:46.587: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:46.636: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:48.628: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:48.677: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:50.669: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:50.717: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:52.710: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:52.758: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:54.750: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:54.798: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:56.791: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:56.838: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:03:58.831: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:03:58.878: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:00.871: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:00.919: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:02.911: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:02.959: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:04.951: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:04.998: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:06.992: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:07.039: INFO: Couldn't get node 
bootstrap-e2e-minion-group-s51h Jan 29 02:04:09.032: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:09.080: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:11.071: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:11.120: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:13.112: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:13.161: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:15.152: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:15.201: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:17.191: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:17.241: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:19.232: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:19.281: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:21.274: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:21.321: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:23.313: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:23.362: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:25.355: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:25.403: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:27.395: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:27.443: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:29.436: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:29.483: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:31.476: INFO: Couldn't get node bootstrap-e2e-minion-group-6w15 Jan 29 02:04:31.523: INFO: Couldn't get node bootstrap-e2e-minion-group-s51h Jan 29 02:04:38.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:38.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:40.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:40.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:42.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:42.277: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:44.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:44.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:46.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:46.370: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:48.417: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 02:04:48.417: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:50.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:50.464: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:52.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:52.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:54.575: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:54.575: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:56.621: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:56.621: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:04:58.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 02:04:58.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:00.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:00.713: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:02.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:02.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:04.809: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:04.809: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:06.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:06.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:08.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 02:05:08.906: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:10.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:10.952: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:12.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:12.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:15.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:15.048: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:17.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:17.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:19.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 02:05:19.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:21.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:21.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:23.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:23.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:25.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:25.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:27.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:27.396: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:29.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 02:05:29.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:31.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:31.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:33.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-s51h is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:33.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-6w15 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 02:05:35.535: INFO: Node bootstrap-e2e-minion-group-6w15 didn't reach desired Ready condition status (false) within 2m0s Jan 29 02:05:35.535: INFO: Node bootstrap-e2e-minion-group-s51h didn't reach desired Ready condition status (false) within 2m0s Jan 29 02:05:35.535: INFO: Node bootstrap-e2e-minion-group-6w15 failed reboot test. Jan 29 02:05:35.535: INFO: Node bootstrap-e2e-minion-group-s51h failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 02:05:35.536 < Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/29/23 02:05:35.536 (2m2.01s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 02:05:35.536 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 02:05:35.536
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-2nvv4 to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} FailedMount: MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 4.229909205s (4.229917066s including waiting)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.7:8181/ready": dial tcp 10.64.3.7:8181: connect: connection refused
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-2nvv4_kube-system(c5a7c76e-33f7-4271-a7f7-8f4b6013857d)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-2nvv4: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Readiness probe failed: Get "http://10.64.3.18:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-sch2n to bootstrap-e2e-minion-group-s51h
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 968.405842ms (968.417139ms including waiting)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container coredns
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f-sch2n: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Readiness probe failed: Get "http://10.64.2.4:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-2nvv4
Jan 29 02:05:35.587: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-sch2n
Jan 29 02:05:35.587: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 02:05:35.587: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 02:05:35.587: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 02:05:35.587: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 02:05:35.587: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 02:05:35.587: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 02:05:35.587: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3d580 became leader
Jan 29 02:05:35.587: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b84f3 became leader
Jan 29 02:05:35.587: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_1f6a8 became leader
Jan 29 02:05:35.587: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_d2447 became leader
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-krs9s to bootstrap-e2e-minion-group-s51h
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 589.41049ms (589.437215ms including waiting)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-krs9s: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rw7fw to bootstrap-e2e-minion-group-7c3d
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 627.397814ms (627.417417ms including waiting)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Unhealthy: Liveness probe failed: Get "http://10.64.1.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-rw7fw: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-x4gbp to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 2.54378487s (2.543795192s including waiting)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container konnectivity-agent
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-x4gbp_kube-system(5cc4536d-8554-405a-ac44-b9cd0b3e7168)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent-x4gbp: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.12:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rw7fw
Jan 29 02:05:35.587: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-x4gbp
Jan 29 02:05:35.587: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-krs9s
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 02:05:35.587: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 02:05:35.587: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:05:35.587: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 02:05:35.587: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_8c88f9f3-0fcf-4820-9f5f-5ee5c968f50d became leader
Jan 29 02:05:35.587: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_e5ddf3f0-26c9-4d3b-ba00-8f32b5849ba5 became leader
Jan 29 02:05:35.587: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_f4908364-bab0-42a0-b122-c2caa2e85f9f became leader
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-fths2 to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 4.452281102s (4.452289884s including waiting)
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container autoscaler
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container autoscaler
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container autoscaler
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985-fths2: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-fths2_kube-system(29242a59-ceae-4689-899f-a4b3bcf58fbe)
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-fths2
Jan 29 02:05:35.587: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Stopping container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-6w15: {kubelet bootstrap-e2e-minion-group-6w15} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-6w15_kube-system(04a1e6edd54c1866478f181a6bf60b38)
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} Killing: Stopping container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7c3d: {kubelet bootstrap-e2e-minion-group-7c3d} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7c3d_kube-system(de9cc9049f2a2a0648059b57c3cc7127)
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} Killing: Stopping container kube-proxy
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-s51h_kube-system(2451b12f9e04e1f8e16fde66c2622fcd)
Jan 29 02:05:35.587: INFO: event for kube-proxy-bootstrap-e2e-minion-group-s51h: {kubelet bootstrap-e2e-minion-group-s51h} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 02:05:35.587: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_a9b313b0-f9fa-43de-b979-0958c05e1287 became leader
Jan 29 02:05:35.587: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_ecac3899-f709-4f43-824f-37faa839889c became leader
Jan 29 02:05:35.587: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_460317a8-6d35-4656-87b9-0d8d3533477a became leader
Jan 29 02:05:35.587: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_627862b6-098a-451d-a466-095484f8ed41 became leader
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-9bf57 to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 573.484189ms (573.492084ms including waiting)
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container default-http-backend
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container default-http-backend
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Unhealthy: Liveness probe failed: Get "http://10.64.3.5:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99-9bf57: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 02:05:35.587: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-9bf57
Jan 29 02:05:35.587: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 02:05:35.587: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bff8h to bootstrap-e2e-minion-group-s51h
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 737.160338ms (737.179651ms including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.876782326s (1.876796204s including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Created: Created container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bff8h: {kubelet bootstrap-e2e-minion-group-s51h} Started: Started container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bv2w9 to bootstrap-e2e-minion-group-6w15
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.977248ms (680.991364ms including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.818844362s (1.818852935s including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Created: Created container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-bv2w9: {kubelet bootstrap-e2e-minion-group-6w15} Started: Started container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-pn2qm to bootstrap-e2e-minion-group-7c3d
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 679.514836ms (679.523319ms including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.788401445s (1.788433466s including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Created: Created container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-pn2qm: {kubelet bootstrap-e2e-minion-group-7c3d} Started: Started container prometheus-to-sd-exporter
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-qnhsn to bootstrap-e2e-master
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 888.975253ms (888.981818ms including waiting)
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 02:05:35.587: INFO: event for metadata-proxy-v0.1-qnhsn: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.94842067s (1.94843