go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
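For reference, the escaped --ginkgo.focus pattern above selects this single spec:

    Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards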
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:08:20.793
(from ginkgo_report.xml)
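What the focused spec does: it SSHes to each of the three worker nodes and launches the script below in the background (reconstructed verbatim from the SSH command echoed later in the trace). The script inserts an iptables rule that drops all inbound traffic except loopback, holds it for 120 seconds, then removes it; the harness meanwhile waits up to 2m0s for each node's Ready condition to become false before checking that the node recovers. In this run all three nodes stayed Ready for the full 2m0s, which is what trips the failure at reboot.go:190.

    nohup sh -c '
        set -x
        sleep 10
        while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        while true; do sudo iptables -I INPUT 2 -j DROP && break; done
        date
        sleep 120
        while true; do sudo iptables -D INPUT -j DROP && break; done
        while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-inbound.log 2>&1 &

The same condition the poll loop reads can also be checked by hand when triaging; a minimal sketch assuming ordinary kubectl access with the run's kubeconfig (/workspace/.kube/config), not something the harness itself runs:

    kubectl --kubeconfig=/workspace/.kube/config get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'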
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:06:02.202 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:06:02.203 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:06:02.203 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 09:06:02.203 Jan 30 09:06:02.203: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 09:06:02.204 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 09:06:02.557 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 09:06:02.645 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:06:02.726 (524ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:06:02.726 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:06:02.727 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 09:06:02.727 Jan 30 09:06:02.912: INFO: Getting bootstrap-e2e-minion-group-hx8v Jan 30 09:06:02.912: INFO: Getting bootstrap-e2e-minion-group-ctd3 Jan 30 09:06:02.912: INFO: Getting bootstrap-e2e-minion-group-7cr1 Jan 30 09:06:02.957: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be true Jan 30 09:06:02.957: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be true Jan 30 09:06:02.957: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hx8v condition Ready to be true Jan 30 09:06:03.002: INFO: Node bootstrap-e2e-minion-group-ctd3 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:06:03.002: INFO: Node bootstrap-e2e-minion-group-7cr1 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hb8pr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-f6lhm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.002: INFO: Node bootstrap-e2e-minion-group-hx8v has 4 assigned pods with no 
liveness probes: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.003: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.003: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-xdrbh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.003: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ljgk8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.054: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=true. Elapsed: 51.751762ms Jan 30 09:06:03.054: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-hb8pr": Phase="Running", Reason="", readiness=true. Elapsed: 53.970317ms Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-hb8pr" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 53.720689ms Jan 30 09:06:03.056: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-f6lhm": Phase="Running", Reason="", readiness=true. Elapsed: 54.007235ms Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-f6lhm" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1": Phase="Running", Reason="", readiness=true. Elapsed: 53.925475ms Jan 30 09:06:03.056: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:06:03.056: INFO: Getting external IP address for bootstrap-e2e-minion-group-7cr1 Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-ljgk8": Phase="Running", Reason="", readiness=true. Elapsed: 53.762291ms Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-ljgk8" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-7cr1(34.82.80.94:22) Jan 30 09:06:03.056: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=true. 
Elapsed: 53.843884ms Jan 30 09:06:03.056: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:06:03.056: INFO: Getting external IP address for bootstrap-e2e-minion-group-hx8v Jan 30 09:06:03.056: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-hx8v(34.127.2.148:22) Jan 30 09:06:03.058: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=true. Elapsed: 55.415477ms Jan 30 09:06:03.058: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.058: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:06:03.058: INFO: Getting external IP address for bootstrap-e2e-minion-group-ctd3 Jan 30 09:06:03.058: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-ctd3(35.197.47.9:22) Jan 30 09:06:03.582: INFO: ssh prow@35.197.47.9:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 09:06:03.582: INFO: ssh prow@35.197.47.9:22: stdout: "" Jan 30 09:06:03.582: INFO: ssh prow@35.197.47.9:22: stderr: "" Jan 30 09:06:03.582: INFO: ssh prow@35.197.47.9:22: exit code: 0 Jan 30 09:06:03.582: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be false Jan 30 09:06:03.582: INFO: ssh prow@34.127.2.148:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 09:06:03.582: INFO: ssh prow@34.127.2.148:22: stdout: "" Jan 30 09:06:03.582: INFO: ssh prow@34.127.2.148:22: stderr: "" Jan 30 09:06:03.582: INFO: ssh prow@34.127.2.148:22: exit code: 0 Jan 30 09:06:03.582: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hx8v condition Ready to be false Jan 30 09:06:03.589: INFO: ssh prow@34.82.80.94:22: command: 
nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 09:06:03.589: INFO: ssh prow@34.82.80.94:22: stdout: "" Jan 30 09:06:03.589: INFO: ssh prow@34.82.80.94:22: stderr: "" Jan 30 09:06:03.589: INFO: ssh prow@34.82.80.94:22: exit code: 0 Jan 30 09:06:03.589: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be false Jan 30 09:06:03.646: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:03.646: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:03.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:05.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:05.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:05.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:07.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:07.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:07.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:09.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:09.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:09.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:11.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:11.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:11.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:06:13.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:13.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:13.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:15.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:15.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:15.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:17.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:17.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:17.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:20.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:20.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:20.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:22.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:22.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:22.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:24.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:24.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:24.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:26.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:26.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:26.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:28.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:28.203: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:28.203: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:30.241: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:30.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:30.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:32.286: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:32.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:32.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:34.330: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:34.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:34.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:54.199: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:54.199: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:54.199: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:56.244: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:06:56.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:56.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:58.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:58.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:58.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:00.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:00.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:00.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:02.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:02.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:02.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:04.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:04.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:04.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:06.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:06.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:06.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:08.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:08.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:08.525: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:10.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:10.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:10.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:12.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:12.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:12.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:14.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:14.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:14.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:16.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:16.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:16.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:18.758: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:18.760: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:18.760: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:20.805: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:20.805: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:07:20.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:22.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:22.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:22.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:24.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:24.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:24.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:26.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:26.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:26.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:28.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:28.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:28.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:31.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:31.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:31.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:33.081: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:33.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:33.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:35.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:35.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:35.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:37.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:37.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:37.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:39.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:39.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:39.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:41.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:41.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:41.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:43.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:43.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:43.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:45.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:45.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:45.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:07:47.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:47.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:47.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:49.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:49.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:49.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:51.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:51.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:51.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:53.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:53.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:53.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:55.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:55.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:55.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:57.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:57.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:57.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:59.698: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:59.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:59.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:01.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:01.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:01.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:03.743: INFO: Node bootstrap-e2e-minion-group-hx8v didn't reach desired Ready condition status (false) within 2m0s Jan 30 09:08:03.747: INFO: Node bootstrap-e2e-minion-group-ctd3 didn't reach desired Ready condition status (false) within 2m0s Jan 30 09:08:03.747: INFO: Node bootstrap-e2e-minion-group-7cr1 didn't reach desired Ready condition status (false) within 2m0s Jan 30 09:08:03.747: INFO: Node bootstrap-e2e-minion-group-7cr1 failed reboot test. Jan 30 09:08:03.747: INFO: Node bootstrap-e2e-minion-group-ctd3 failed reboot test. Jan 30 09:08:03.747: INFO: Node bootstrap-e2e-minion-group-hx8v failed reboot test. Jan 30 09:08:03.748: INFO: Executing termination hook on nodes Jan 30 09:08:03.748: INFO: Getting external IP address for bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:03.748: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-7cr1(34.82.80.94:22) Jan 30 09:08:19.749: INFO: ssh prow@34.82.80.94:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 09:08:19.749: INFO: ssh prow@34.82.80.94:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 09:06:13 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 09:08:19.749: INFO: ssh prow@34.82.80.94:22: stderr: "" Jan 30 09:08:19.749: INFO: ssh prow@34.82.80.94:22: exit code: 0 Jan 30 09:08:19.749: INFO: Getting external IP address for bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:19.749: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-ctd3(35.197.47.9:22) Jan 30 09:08:20.269: INFO: ssh prow@35.197.47.9:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 09:08:20.269: INFO: ssh prow@35.197.47.9:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 09:06:13 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 09:08:20.269: INFO: ssh prow@35.197.47.9:22: stderr: "" Jan 30 09:08:20.269: INFO: ssh prow@35.197.47.9:22: exit code: 0 Jan 30 09:08:20.269: INFO: Getting external IP address for bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.269: INFO: SSH "cat /tmp/drop-inbound.log && 
rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-hx8v(34.127.2.148:22) Jan 30 09:08:20.792: INFO: ssh prow@34.127.2.148:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 09:08:20.792: INFO: ssh prow@34.127.2.148:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 09:06:13 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 09:08:20.792: INFO: ssh prow@34.127.2.148:22: stderr: "" Jan 30 09:08:20.792: INFO: ssh prow@34.127.2.148:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:08:20.793 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 09:08:20.793 (2m18.066s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:08:20.793 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 09:08:20.793 Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5d7s9 to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.12:8181/ready": dial tcp 10.64.0.12:8181: connect: connection refused Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5d7s9_kube-system(7bd270c5-f2ec-4a85-9058-86135914ebab) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": dial tcp 10.64.0.15:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-w57z6 to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.519322897s (3.519341369s including waiting) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.4:8181/ready": dial tcp 10.64.0.4:8181: connect: connection refused Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.10:8181/ready": dial tcp 10.64.0.10:8181: connect: connection refused Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-w57z6_kube-system(1e79e82a-e647-48da-a4fd-05ad6d505eef) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-w57z6 Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5d7s9 Jan 30 09:08:20.859: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 09:08:20.859: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:08:20.859: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_94038 became leader Jan 30 09:08:20.859: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_32bba became leader Jan 30 09:08:20.859: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b3e6 became leader Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-b8sc4 to bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 629.593814ms (629.614416ms including waiting) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.3:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Failed: Error: failed to get sandbox container task: no running task found: task 1d9c817ce846f529aa76391072c1a7fd56a9f47957fc17a2690b2671de27ff84 not found: not found Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rj7fc to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.815723669s (1.81573909s including waiting) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.7:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Failed: Error: failed to get sandbox container task: no running task found: task 11f3dcad8b3972dd50b4e21b10c349a64def00d0106a07d500fcf4637de4bd0d not found: not found Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it 
will be killed and re-created. Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.17:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-skfnx to bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 625.155725ms (625.171974ms including waiting) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:08:20.859: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rj7fc Jan 30 09:08:20.859: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-b8sc4 Jan 30 09:08:20.859: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-skfnx Jan 30 09:08:20.859: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 30 09:08:20.859: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 30 09:08:20.859: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 30 09:08:20.859: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 30 09:08:20.859: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:08:20.859: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:08:20.859: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 09:08:20.859: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 09:08:20.859: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(4fc5a5aeac3c203e3876adb08d878c93) Jan 30 09:08:20.859: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ce182e7b-00b7-4169-8624-f53196308681 became leader Jan 30 09:08:20.859: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_274fd4ca-797b-43c2-b1b6-f36d9e36c2e7 became leader Jan 30 09:08:20.859: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_4b65a89d-ba5e-49e4-8048-9ec50f56a58a became leader Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-xdrbh to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.161986755s (3.162044961s including waiting) Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-xdrbh Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7cr1_kube-system(dd1d9c1acf429448066a68f4147cfb77) Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ctd3_kube-system(f92a9aed872df1bead32b1c0dd213385) Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hx8v_kube-system(acb97e253f2500aa0581d024a2217293) Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(5b3c0a3dad3d723f9e5778ab0a62849c) Jan 30 09:08:20.859: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_34628f7a-9073-4ee1-9bb3-51be47583fdb became leader Jan 30 09:08:20.859: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_914f6c8b-8db8-44f8-a433-b4e094f84179 became leader Jan 30 09:08:20.859: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_7f5e92cb-6a3a-45d0-be98-a7453645cadf became leader Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-fq84f to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.812003002s (1.812012686s including waiting) Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container default-http-backend Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-fq84f Jan 30 09:08:20.859: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 09:08:20.859: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 09:08:20.859: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 09:08:20.859: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 09:08:20.859: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-d2qbs to bootstrap-e2e-master Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 767.072137ms (767.083529ms including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 
09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.898844974s (1.898853058s including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-f6lhm to bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 760.69768ms (760.732368ms including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.890232448s (1.890241652s including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hb8pr to bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 790.982977ms (791.000841ms including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet 
bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.062535103s (2.062546601s including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ljgk8 to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 732.378395ms (732.411068ms including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.82877905s (1.828788865s including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ljgk8 Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-d2qbs Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hb8pr Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-f6lhm Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-v25xc to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.900251051s (3.900291297s including waiting) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.359025956s (3.35903606s including waiting) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server-nanny Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server-nanny Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server-nanny Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-q4757 to bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.428302629s (1.428313025s including waiting) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.023447114s (1.023460341s including waiting) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server-nanny Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server-nanny Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.510528775s (3.510537487s including waiting) Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2) Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:08:20.859 (67ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:08:20.859 Jan 30 09:08:20.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:08:20.904 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:08:20.904 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:08:20.904 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 09:08:20.904 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 09:08:20.904 STEP: Collecting events from namespace "reboot-771". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 09:08:20.904 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 09:08:20.947 Jan 30 09:08:20.988: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 09:08:20.988: INFO: Jan 30 09:08:21.031: INFO: Logging node info for node bootstrap-e2e-master Jan 30 09:08:21.074: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master a34af008-0528-47e4-a6c5-cd39d827847f 728 0 2023-01-30 09:04:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 09:05:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:20 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:20 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:20 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:05:20 +0000 UTC,LastTransitionTime:2023-01-30 09:04:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.185.231.33,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:87a05ebeec11f95c366dec3ebfb54572,SystemUUID:87a05ebe-ec11-f95c-366d-ec3ebfb54572,BootID:b21fbdba-5e8a-4560-8e5c-0b3f13ec273b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:125279033,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:08:21.075: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 09:08:21.120: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 09:08:21.183: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container etcd-container ready: true, restart count 2 Jan 30 09:08:21.183: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container konnectivity-server-container ready: true, restart count 1 Jan 30 09:08:21.183: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 30 09:08:21.183: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container kube-scheduler ready: true, restart count 2 Jan 30 09:08:21.183: INFO: metadata-proxy-v0.1-d2qbs started at 2023-01-30 09:04:49 +0000 UTC (0+2 container statuses recorded) Jan 30 09:08:21.183: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:08:21.183: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:08:21.183: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container etcd-container ready: true, restart count 1 Jan 30 09:08:21.183: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container kube-apiserver ready: true, restart count 0 Jan 30 09:08:21.183: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container kube-addon-manager ready: true, restart count 0 Jan 30 09:08:21.183: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container l7-lb-controller ready: true, restart count 4 Jan 30 09:08:21.374: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 09:08:21.374: INFO: Logging node info for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:21.417: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7cr1 059c215f-20bf-4d2a-9d08-dd76e71cd121 697 0 2023-01-30 09:04:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7cr1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 09:04:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 09:04:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-7cr1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} 
{<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:44 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:44 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:44 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:04:44 +0000 UTC,LastTransitionTime:2023-01-30 09:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.80.94,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:152f308b39d31a7b07927ba8747dc4e6,SystemUUID:152f308b-39d3-1a7b-0792-7ba8747dc4e6,BootID:4059308d-f9aa-4ce2-864c-66c4173d2fb3,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:08:21.417: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:21.461: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:21.522: INFO: kube-proxy-bootstrap-e2e-minion-group-7cr1 started at 2023-01-30 09:04:13 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.522: INFO: Container kube-proxy ready: true, restart count 2 Jan 30 09:08:21.522: INFO: metadata-proxy-v0.1-f6lhm started at 2023-01-30 09:04:15 +0000 UTC (0+2 container statuses recorded) Jan 30 09:08:21.522: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:08:21.522: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:08:21.522: INFO: konnectivity-agent-b8sc4 started at 2023-01-30 09:04:29 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.522: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 30 09:08:21.690: INFO: Latency metrics for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:21.690: INFO: Logging node info for node bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:21.732: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ctd3 1fc63985-9867-4666-aa14-c3224e06ef55 727 0 2023-01-30 09:04:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ctd3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 09:04:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 09:05:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-ctd3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} 
{<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:05:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.47.9,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ctd3.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ctd3.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:589c645a71700ad5ee732b565ea0a6c2,SystemUUID:589c645a-7170-0ad5-ee73-2b565ea0a6c2,BootID:26986229-89df-4d64-a832-0aafc321f0d5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:08:21.732: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:21.776: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:21.843: INFO: kube-proxy-bootstrap-e2e-minion-group-ctd3 started at 2023-01-30 09:04:13 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.843: INFO: Container kube-proxy ready: false, restart count 2 Jan 30 09:08:21.843: INFO: metadata-proxy-v0.1-hb8pr started at 2023-01-30 09:04:13 +0000 UTC (0+2 container statuses recorded) Jan 30 09:08:21.843: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:08:21.843: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:08:21.843: INFO: konnectivity-agent-skfnx started at 2023-01-30 09:04:29 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.843: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 30 09:08:21.843: INFO: metrics-server-v0.5.2-867b8754b9-q4757 started at 2023-01-30 09:04:40 +0000 UTC (0+2 container statuses recorded) Jan 30 09:08:21.843: INFO: Container metrics-server ready: true, restart count 2 Jan 30 09:08:21.843: INFO: Container metrics-server-nanny ready: true, restart count 2 Jan 30 09:08:22.015: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:22.015: INFO: Logging node info for node bootstrap-e2e-minion-group-hx8v Jan 30 09:08:22.058: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hx8v 2cf6f8aa-df64-4aca-ac1b-6cbf533da69a 686 0 2023-01-30 09:04:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hx8v kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 09:04:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 09:04:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-hx8v,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:15 +0000 UTC,LastTransitionTime:2023-01-30 09:04:15 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:04:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.2.148,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hx8v.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hx8v.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:be2afed7762cfdb54d3ec5133fceeff6,SystemUUID:be2afed7-762c-fdb5-4d3e-c5133fceeff6,BootID:94ca5323-3f83-4283-9c8c-7af34b0fc368,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:08:22.059: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hx8v Jan 30 09:08:22.109: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hx8v Jan 30 09:08:22.170: INFO: l7-default-backend-8549d69d99-fq84f started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container default-http-backend ready: true, restart count 1 Jan 30 09:08:22.170: INFO: volume-snapshot-controller-0 started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container volume-snapshot-controller ready: true, restart count 3 Jan 30 09:08:22.170: INFO: konnectivity-agent-rj7fc started at 2023-01-30 09:04:16 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 30 09:08:22.170: INFO: kube-proxy-bootstrap-e2e-minion-group-hx8v started at 2023-01-30 09:04:09 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container kube-proxy ready: false, restart count 2 Jan 30 09:08:22.170: INFO: coredns-6846b5b5f-w57z6 started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container coredns ready: false, restart count 2 Jan 30 09:08:22.170: INFO: coredns-6846b5b5f-5d7s9 started at 2023-01-30 09:04:22 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container coredns ready: false, restart count 2 Jan 30 09:08:22.170: INFO: metadata-proxy-v0.1-ljgk8 started at 2023-01-30 09:04:10 +0000 UTC (0+2 container statuses recorded) Jan 30 09:08:22.170: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:08:22.170: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:08:22.170: INFO: kube-dns-autoscaler-5f6455f985-xdrbh started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container autoscaler ready: true, restart count 0 Jan 30 09:08:22.343: INFO: Latency metrics for node bootstrap-e2e-minion-group-hx8v END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 09:08:22.343 (1.439s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 09:08:22.344 (1.439s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 09:08:22.344 STEP: Destroying namespace "reboot-771" for this suite. - test/e2e/framework/framework.go:347 @ 01/30/23 09:08:22.344 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 09:08:22.389 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 09:08:22.389 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 09:08:22.389 (0s)
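Side note on the condition dumps above: the Ready status that this test keeps polling can also be checked directly against the cluster. A minimal sketch using standard kubectl jsonpath (the node name is taken from this run; the same filter works for any of the other conditions listed above):

# Print only the Ready condition's status (True/False/Unknown) for one node.
kubectl get node bootstrap-e2e-minion-group-ctd3 \
  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

# Or dump every condition with its reason and last transition time.
kubectl get node bootstrap-e2e-minion-group-ctd3 \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'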
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:08:20.793 from junit_01.xml
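For readability, the escaped command that the test pushes over SSH to each node (quoted verbatim in the log entries below) expands to the following script; indentation and comments are added here for annotation only:

nohup sh -c '
	set -x
	sleep 10
	# Allow loopback traffic first so the node can still reach itself.
	while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
	# Drop all other inbound packets.
	while true; do sudo iptables -I INPUT 2 -j DROP && break; done
	date
	sleep 120
	# Restore inbound traffic after 120 seconds.
	while true; do sudo iptables -D INPUT -j DROP && break; done
	while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-inbound.log 2>&1 &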
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:06:02.202 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:06:02.203 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:06:02.203 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 09:06:02.203 Jan 30 09:06:02.203: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 09:06:02.204 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 09:06:02.557 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 09:06:02.645 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:06:02.726 (524ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:06:02.726 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:06:02.727 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 09:06:02.727 Jan 30 09:06:02.912: INFO: Getting bootstrap-e2e-minion-group-hx8v Jan 30 09:06:02.912: INFO: Getting bootstrap-e2e-minion-group-ctd3 Jan 30 09:06:02.912: INFO: Getting bootstrap-e2e-minion-group-7cr1 Jan 30 09:06:02.957: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be true Jan 30 09:06:02.957: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be true Jan 30 09:06:02.957: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hx8v condition Ready to be true Jan 30 09:06:03.002: INFO: Node bootstrap-e2e-minion-group-ctd3 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:06:03.002: INFO: Node bootstrap-e2e-minion-group-7cr1 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hb8pr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-f6lhm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.002: INFO: Node bootstrap-e2e-minion-group-hx8v has 4 assigned pods with no 
liveness probes: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:06:03.002: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.003: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.003: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-xdrbh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.003: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ljgk8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:06:03.054: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=true. Elapsed: 51.751762ms Jan 30 09:06:03.054: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-hb8pr": Phase="Running", Reason="", readiness=true. Elapsed: 53.970317ms Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-hb8pr" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 53.720689ms Jan 30 09:06:03.056: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-f6lhm": Phase="Running", Reason="", readiness=true. Elapsed: 54.007235ms Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-f6lhm" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1": Phase="Running", Reason="", readiness=true. Elapsed: 53.925475ms Jan 30 09:06:03.056: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:06:03.056: INFO: Getting external IP address for bootstrap-e2e-minion-group-7cr1 Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-ljgk8": Phase="Running", Reason="", readiness=true. Elapsed: 53.762291ms Jan 30 09:06:03.056: INFO: Pod "metadata-proxy-v0.1-ljgk8" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-7cr1(34.82.80.94:22) Jan 30 09:06:03.056: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=true. 
Elapsed: 53.843884ms Jan 30 09:06:03.056: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.056: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:06:03.056: INFO: Getting external IP address for bootstrap-e2e-minion-group-hx8v Jan 30 09:06:03.056: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-hx8v(34.127.2.148:22) Jan 30 09:06:03.058: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=true. Elapsed: 55.415477ms Jan 30 09:06:03.058: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" satisfied condition "running and ready, or succeeded" Jan 30 09:06:03.058: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:06:03.058: INFO: Getting external IP address for bootstrap-e2e-minion-group-ctd3 Jan 30 09:06:03.058: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-ctd3(35.197.47.9:22) Jan 30 09:06:03.582: INFO: ssh prow@35.197.47.9:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 09:06:03.582: INFO: ssh prow@35.197.47.9:22: stdout: "" Jan 30 09:06:03.582: INFO: ssh prow@35.197.47.9:22: stderr: "" Jan 30 09:06:03.582: INFO: ssh prow@35.197.47.9:22: exit code: 0 Jan 30 09:06:03.582: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be false Jan 30 09:06:03.582: INFO: ssh prow@34.127.2.148:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 09:06:03.582: INFO: ssh prow@34.127.2.148:22: stdout: "" Jan 30 09:06:03.582: INFO: ssh prow@34.127.2.148:22: stderr: "" Jan 30 09:06:03.582: INFO: ssh prow@34.127.2.148:22: exit code: 0 Jan 30 09:06:03.582: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hx8v condition Ready to be false Jan 30 09:06:03.589: INFO: ssh prow@34.82.80.94:22: command: 
nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 30 09:06:03.589: INFO: ssh prow@34.82.80.94:22: stdout: "" Jan 30 09:06:03.589: INFO: ssh prow@34.82.80.94:22: stderr: "" Jan 30 09:06:03.589: INFO: ssh prow@34.82.80.94:22: exit code: 0 Jan 30 09:06:03.589: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be false Jan 30 09:06:03.646: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:03.646: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:03.648: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:05.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:05.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:05.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:07.740: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:07.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:07.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:09.787: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:09.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:09.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:11.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:11.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:11.837: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:06:13.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:13.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:13.884: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:15.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:15.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:15.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:17.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:17.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:17.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:20.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:20.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:20.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:22.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:22.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:22.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:24.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:24.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:24.115: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:26.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:26.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:26.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:28.198: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:28.203: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:28.203: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:30.241: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:30.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:30.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:32.286: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:32.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:32.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:34.330: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:34.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:34.338: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:54.199: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:54.199: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:54.199: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:56.244: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:06:56.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:56.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:58.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:58.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:06:58.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:00.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:00.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:00.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:02.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:02.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:02.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:04.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:04.429: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:04.431: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:06.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:06.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:06.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:08.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:08.523: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:08.525: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:10.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:10.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:10.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:12.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:12.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:12.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:14.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:14.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:14.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:16.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:16.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:16.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:18.758: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:18.760: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:18.760: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:20.805: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:20.805: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:07:20.806: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:22.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:22.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:22.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:24.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:24.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:24.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:26.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:26.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:26.944: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:28.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:28.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:28.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:31.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:31.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:31.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:33.081: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:33.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:33.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:35.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:35.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:35.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:37.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:37.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:37.190: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:39.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:39.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:39.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:41.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:41.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:41.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:43.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:43.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:43.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:45.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:45.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:45.374: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:07:47.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:47.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:47.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:49.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:49.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:49.482: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:51.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:51.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:51.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:53.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:53.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:53.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:55.610: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:55.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:55.614: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:57.655: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:57.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:57.658: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:59.698: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:59.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:07:59.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:01.742: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:01.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:01.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:03.743: INFO: Node bootstrap-e2e-minion-group-hx8v didn't reach desired Ready condition status (false) within 2m0s Jan 30 09:08:03.747: INFO: Node bootstrap-e2e-minion-group-ctd3 didn't reach desired Ready condition status (false) within 2m0s Jan 30 09:08:03.747: INFO: Node bootstrap-e2e-minion-group-7cr1 didn't reach desired Ready condition status (false) within 2m0s Jan 30 09:08:03.747: INFO: Node bootstrap-e2e-minion-group-7cr1 failed reboot test. Jan 30 09:08:03.747: INFO: Node bootstrap-e2e-minion-group-ctd3 failed reboot test. Jan 30 09:08:03.747: INFO: Node bootstrap-e2e-minion-group-hx8v failed reboot test. Jan 30 09:08:03.748: INFO: Executing termination hook on nodes Jan 30 09:08:03.748: INFO: Getting external IP address for bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:03.748: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-7cr1(34.82.80.94:22) Jan 30 09:08:19.749: INFO: ssh prow@34.82.80.94:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 09:08:19.749: INFO: ssh prow@34.82.80.94:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 09:06:13 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 09:08:19.749: INFO: ssh prow@34.82.80.94:22: stderr: "" Jan 30 09:08:19.749: INFO: ssh prow@34.82.80.94:22: exit code: 0 Jan 30 09:08:19.749: INFO: Getting external IP address for bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:19.749: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-ctd3(35.197.47.9:22) Jan 30 09:08:20.269: INFO: ssh prow@35.197.47.9:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 09:08:20.269: INFO: ssh prow@35.197.47.9:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 09:06:13 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 09:08:20.269: INFO: ssh prow@35.197.47.9:22: stderr: "" Jan 30 09:08:20.269: INFO: ssh prow@35.197.47.9:22: exit code: 0 Jan 30 09:08:20.269: INFO: Getting external IP address for bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.269: INFO: SSH "cat /tmp/drop-inbound.log && 
rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-hx8v(34.127.2.148:22) Jan 30 09:08:20.792: INFO: ssh prow@34.127.2.148:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 30 09:08:20.792: INFO: ssh prow@34.127.2.148:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nMon Jan 30 09:06:13 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 30 09:08:20.792: INFO: ssh prow@34.127.2.148:22: stderr: "" Jan 30 09:08:20.792: INFO: ssh prow@34.127.2.148:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:08:20.793 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/30/23 09:08:20.793 (2m18.066s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:08:20.793 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 09:08:20.793 Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5d7s9 to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.12:8181/ready": dial tcp 10.64.0.12:8181: connect: connection refused Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5d7s9_kube-system(7bd270c5-f2ec-4a85-9058-86135914ebab) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": dial tcp 10.64.0.15:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-w57z6 to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.519322897s (3.519341369s including waiting) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.4:8181/ready": dial tcp 10.64.0.4:8181: connect: connection refused Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.10:8181/ready": dial tcp 10.64.0.10:8181: connect: connection refused Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-w57z6_kube-system(1e79e82a-e647-48da-a4fd-05ad6d505eef) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-w57z6 Jan 30 09:08:20.859: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5d7s9 Jan 30 09:08:20.859: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 09:08:20.859: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
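
The per-pod "event for ..." dump above can be reproduced for a single pod with a field selector on the involved object. A minimal illustrative client-go sketch, using the coredns pod name from the events; the kubeconfig lookup is an assumption.

```go
// Illustrative only: list kube-system events for one pod via a field selector
// on the involved object, the per-pod view behind the event dump above.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed kubeconfig lookup
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Select only events whose involved object is the pod of interest.
	events, err := cs.CoreV1().Events("kube-system").List(context.Background(), metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Pod,involvedObject.name=coredns-6846b5b5f-w57z6",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Reason, e.Source.Component, e.Message)
	}
}
```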
Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:08:20.859: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:08:20.859: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:08:20.859: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_94038 became leader Jan 30 09:08:20.859: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_32bba became leader Jan 30 09:08:20.859: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b3e6 became leader Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-b8sc4 to bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 629.593814ms (629.614416ms including waiting) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.3:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Failed: Error: failed to get sandbox container task: no running task found: task 1d9c817ce846f529aa76391072c1a7fd56a9f47957fc17a2690b2671de27ff84 not found: not found Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rj7fc to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.815723669s (1.81573909s including waiting) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.7:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Failed: Error: failed to get sandbox container task: no running task found: task 11f3dcad8b3972dd50b4e21b10c349a64def00d0106a07d500fcf4637de4bd0d not found: not found Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it 
will be killed and re-created. Jan 30 09:08:20.859: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.17:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-skfnx to bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 625.155725ms (625.171974ms including waiting) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:08:20.859: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:08:20.859: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rj7fc Jan 30 09:08:20.859: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-b8sc4 Jan 30 09:08:20.859: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-skfnx Jan 30 09:08:20.859: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 30 09:08:20.859: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 30 09:08:20.859: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 30 09:08:20.859: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 30 09:08:20.859: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:08:20.859: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:08:20.859: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 09:08:20.859: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 09:08:20.859: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(4fc5a5aeac3c203e3876adb08d878c93) Jan 30 09:08:20.859: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ce182e7b-00b7-4169-8624-f53196308681 became leader Jan 30 09:08:20.859: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_274fd4ca-797b-43c2-b1b6-f36d9e36c2e7 became leader Jan 30 09:08:20.859: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_4b65a89d-ba5e-49e4-8048-9ec50f56a58a became leader Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
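
Several FailedScheduling messages above cite an untolerated node.kubernetes.io/not-ready taint and an unschedulable node. A small illustrative check of those two node properties (this is a triage view, not the scheduler's predicate logic; the kubeconfig lookup is an assumption):

```go
// Illustrative check of the node properties the FailedScheduling messages
// point at: the node.kubernetes.io/not-ready taint and spec.unschedulable.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed kubeconfig lookup
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		notReadyTaint := false
		for _, t := range n.Spec.Taints {
			if t.Key == "node.kubernetes.io/not-ready" {
				notReadyTaint = true
			}
		}
		fmt.Printf("%s\tunschedulable=%v\tnot-ready-taint=%v\n", n.Name, n.Spec.Unschedulable, notReadyTaint)
	}
}
```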
Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-xdrbh to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.161986755s (3.162044961s including waiting) Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-xdrbh Jan 30 09:08:20.859: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7cr1_kube-system(dd1d9c1acf429448066a68f4147cfb77) Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ctd3_kube-system(f92a9aed872df1bead32b1c0dd213385) Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
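
The "Back-off restarting failed container" events above have a pod-status counterpart: per-container restart counts and CrashLoopBackOff-style waiting reasons. A hedged sketch that lists the kube-system pods on one node and prints that status; the node name is an example and spec.nodeName is a standard pod field selector.

```go
// Illustrative sketch: show restart counts and waiting reasons for the pods
// scheduled to one node, the status behind the back-off events above.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed kubeconfig lookup
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=bootstrap-e2e-minion-group-7cr1", // example node
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			waiting := ""
			if st.State.Waiting != nil {
				waiting = st.State.Waiting.Reason // e.g. CrashLoopBackOff
			}
			fmt.Printf("%s/%s\trestarts=%d\twaiting=%s\n", p.Name, st.Name, st.RestartCount, waiting)
		}
	}
}
```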
Jan 30 09:08:20.859: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hx8v_kube-system(acb97e253f2500aa0581d024a2217293) Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:08:20.859: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(5b3c0a3dad3d723f9e5778ab0a62849c) Jan 30 09:08:20.859: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_34628f7a-9073-4ee1-9bb3-51be47583fdb became leader Jan 30 09:08:20.859: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_914f6c8b-8db8-44f8-a433-b4e094f84179 became leader Jan 30 09:08:20.859: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_7f5e92cb-6a3a-45d0-be98-a7453645cadf became leader Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
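
The repeated LeaderElection "became leader" events above suggest control-plane components restarting and re-acquiring their locks. Assuming the components publish coordination.k8s.io Leases in kube-system (the default in recent releases; the kubeconfig lookup is likewise an assumption), the current holders can be listed as a quick cross-check:

```go
// Illustrative only: list the coordination Leases in kube-system and print
// the current holder of each, to cross-check the LeaderElection events above.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG")) // assumed kubeconfig lookup
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	leases, err := cs.CoordinationV1().Leases("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, l := range leases.Items {
		holder := ""
		if l.Spec.HolderIdentity != nil {
			holder = *l.Spec.HolderIdentity
		}
		fmt.Printf("%s\theld by %s\n", l.Name, holder)
	}
}
```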
Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-fq84f to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.812003002s (1.812012686s including waiting) Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container default-http-backend Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:08:20.859: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-fq84f Jan 30 09:08:20.859: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 09:08:20.859: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 09:08:20.859: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 09:08:20.859: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 09:08:20.859: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-d2qbs to bootstrap-e2e-master Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 767.072137ms (767.083529ms including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 
09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.898844974s (1.898853058s including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-f6lhm to bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 760.69768ms (760.732368ms including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.890232448s (1.890241652s including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hb8pr to bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 790.982977ms (791.000841ms including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet 
bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.062535103s (2.062546601s including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ljgk8 to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 732.378395ms (732.411068ms including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.82877905s (1.828788865s including waiting) Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ljgk8 Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-d2qbs Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hb8pr Jan 30 09:08:20.859: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-f6lhm Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-v25xc to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.900251051s (3.900291297s including waiting) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.359025956s (3.35903606s including waiting) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server-nanny Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server-nanny Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server-nanny Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-q4757 to bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.428302629s (1.428313025s including waiting) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.023447114s (1.023460341s including waiting) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server-nanny Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server-nanny Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container metrics-server Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 09:08:20.859: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-hx8v Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.510528775s (3.510537487s including waiting) Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
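
Many of the probe failures above end in "Client.Timeout exceeded while awaiting headers" or "connection refused": the kubelet's HTTP GET to the container's readiness or liveness endpoint either never receives headers before its deadline or is actively rejected while the container is down. A generic stand-alone illustration of the timeout case (not kubelet's prober; the target is the coredns /ready address quoted in the events above, and the 1s timeout is an assumption):

```go
// Generic illustration of an HTTP readiness-style check with a short client
// timeout; if no response arrives in time, the error mirrors the probe
// messages above rather than an HTTP status code.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second} // assumed timeout
	resp, err := client.Get("http://10.64.0.15:8181/ready")
	if err != nil {
		// Typical outcome when the endpoint is unreachable or unresponsive.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.StatusCode)
}
```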
Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2) Jan 30 09:08:20.859: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:08:20.859 (67ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:08:20.859 Jan 30 09:08:20.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:08:20.904 (45ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:08:20.904 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:08:20.904 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 09:08:20.904 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 09:08:20.904 STEP: Collecting events from namespace "reboot-771". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 09:08:20.904 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 09:08:20.947 Jan 30 09:08:20.988: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 09:08:20.988: INFO: Jan 30 09:08:21.031: INFO: Logging node info for node bootstrap-e2e-master Jan 30 09:08:21.074: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master a34af008-0528-47e4-a6c5-cd39d827847f 728 0 2023-01-30 09:04:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 09:05:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:20 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:20 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:20 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:05:20 +0000 UTC,LastTransitionTime:2023-01-30 09:04:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.185.231.33,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:87a05ebeec11f95c366dec3ebfb54572,SystemUUID:87a05ebe-ec11-f95c-366d-ec3ebfb54572,BootID:b21fbdba-5e8a-4560-8e5c-0b3f13ec273b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:125279033,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:08:21.075: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 09:08:21.120: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 09:08:21.183: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container etcd-container ready: true, restart count 2 Jan 30 09:08:21.183: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container konnectivity-server-container ready: true, restart count 1 Jan 30 09:08:21.183: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 30 09:08:21.183: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container kube-scheduler ready: true, restart count 2 Jan 30 09:08:21.183: INFO: metadata-proxy-v0.1-d2qbs started at 2023-01-30 09:04:49 +0000 UTC (0+2 container statuses recorded) Jan 30 09:08:21.183: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:08:21.183: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:08:21.183: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container etcd-container ready: true, restart count 1 Jan 30 09:08:21.183: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container kube-apiserver ready: true, restart count 0 Jan 30 09:08:21.183: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container kube-addon-manager ready: true, restart count 0 Jan 30 09:08:21.183: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.183: INFO: Container l7-lb-controller ready: true, restart count 4 Jan 30 09:08:21.374: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 09:08:21.374: INFO: Logging node info for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:21.417: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7cr1 059c215f-20bf-4d2a-9d08-dd76e71cd121 697 0 2023-01-30 09:04:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7cr1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 09:04:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 09:04:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-7cr1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} 
{<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:18 +0000 UTC,LastTransitionTime:2023-01-30 09:04:17 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:44 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:44 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:44 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:04:44 +0000 UTC,LastTransitionTime:2023-01-30 09:04:15 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.80.94,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:152f308b39d31a7b07927ba8747dc4e6,SystemUUID:152f308b-39d3-1a7b-0792-7ba8747dc4e6,BootID:4059308d-f9aa-4ce2-864c-66c4173d2fb3,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:08:21.417: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:21.461: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:21.522: INFO: kube-proxy-bootstrap-e2e-minion-group-7cr1 started at 2023-01-30 09:04:13 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.522: INFO: Container kube-proxy ready: true, restart count 2 Jan 30 09:08:21.522: INFO: metadata-proxy-v0.1-f6lhm started at 2023-01-30 09:04:15 +0000 UTC (0+2 container statuses recorded) Jan 30 09:08:21.522: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:08:21.522: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:08:21.522: INFO: konnectivity-agent-b8sc4 started at 2023-01-30 09:04:29 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.522: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 30 09:08:21.690: INFO: Latency metrics for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:21.690: INFO: Logging node info for node bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:21.732: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ctd3 1fc63985-9867-4666-aa14-c3224e06ef55 727 0 2023-01-30 09:04:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ctd3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 09:04:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 09:05:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-ctd3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} 
{<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:17 +0000 UTC,LastTransitionTime:2023-01-30 09:04:16 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:12 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:12 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:05:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:12 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:05:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.47.9,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ctd3.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ctd3.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:589c645a71700ad5ee732b565ea0a6c2,SystemUUID:589c645a-7170-0ad5-ee73-2b565ea0a6c2,BootID:26986229-89df-4d64-a832-0aafc321f0d5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:08:21.732: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:21.776: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:21.843: INFO: kube-proxy-bootstrap-e2e-minion-group-ctd3 started at 2023-01-30 09:04:13 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.843: INFO: Container kube-proxy ready: false, restart count 2 Jan 30 09:08:21.843: INFO: metadata-proxy-v0.1-hb8pr started at 2023-01-30 09:04:13 +0000 UTC (0+2 container statuses recorded) Jan 30 09:08:21.843: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:08:21.843: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:08:21.843: INFO: konnectivity-agent-skfnx started at 2023-01-30 09:04:29 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:21.843: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 30 09:08:21.843: INFO: metrics-server-v0.5.2-867b8754b9-q4757 started at 2023-01-30 09:04:40 +0000 UTC (0+2 container statuses recorded) Jan 30 09:08:21.843: INFO: Container metrics-server ready: true, restart count 2 Jan 30 09:08:21.843: INFO: Container metrics-server-nanny ready: true, restart count 2 Jan 30 09:08:22.015: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:22.015: INFO: Logging node info for node bootstrap-e2e-minion-group-hx8v Jan 30 09:08:22.058: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hx8v 2cf6f8aa-df64-4aca-ac1b-6cbf533da69a 686 0 2023-01-30 09:04:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hx8v kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 09:04:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 09:04:40 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-hx8v,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:04:14 +0000 UTC,LastTransitionTime:2023-01-30 09:04:13 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:15 +0000 UTC,LastTransitionTime:2023-01-30 09:04:15 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:04:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:04:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.2.148,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hx8v.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hx8v.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:be2afed7762cfdb54d3ec5133fceeff6,SystemUUID:be2afed7-762c-fdb5-4d3e-c5133fceeff6,BootID:94ca5323-3f83-4283-9c8c-7af34b0fc368,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d 
registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:08:22.059: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hx8v Jan 30 09:08:22.109: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hx8v Jan 30 09:08:22.170: INFO: l7-default-backend-8549d69d99-fq84f started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container default-http-backend ready: true, restart count 1 Jan 30 09:08:22.170: INFO: volume-snapshot-controller-0 started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container volume-snapshot-controller ready: true, restart count 3 Jan 30 09:08:22.170: INFO: konnectivity-agent-rj7fc started at 2023-01-30 09:04:16 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container konnectivity-agent ready: true, restart count 2 Jan 30 09:08:22.170: INFO: kube-proxy-bootstrap-e2e-minion-group-hx8v started at 2023-01-30 09:04:09 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container kube-proxy ready: false, restart count 2 Jan 30 09:08:22.170: INFO: coredns-6846b5b5f-w57z6 started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container coredns ready: false, restart count 2 Jan 30 09:08:22.170: INFO: coredns-6846b5b5f-5d7s9 started at 2023-01-30 09:04:22 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container coredns ready: false, restart count 2 Jan 30 09:08:22.170: INFO: metadata-proxy-v0.1-ljgk8 started at 2023-01-30 09:04:10 +0000 UTC (0+2 container statuses recorded) Jan 30 09:08:22.170: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:08:22.170: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:08:22.170: INFO: kube-dns-autoscaler-5f6455f985-xdrbh started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:08:22.170: INFO: Container autoscaler ready: true, restart count 0 Jan 30 09:08:22.343: INFO: Latency metrics for node bootstrap-e2e-minion-group-hx8v END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 09:08:22.343 (1.439s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 09:08:22.344 (1.439s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 09:08:22.344 STEP: Destroying namespace "reboot-771" for this suite. - test/e2e/framework/framework.go:347 @ 01/30/23 09:08:22.344 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 09:08:22.389 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 09:08:22.389 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 09:08:22.389 (0s)
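A recurring signal in the dump above is kube-proxy containers that never return to ready on the rebooted nodes (e.g. "Container kube-proxy ready: false, restart count 2" on bootstrap-e2e-minion-group-ctd3 and -hx8v), which is what keeps the "running and ready, or succeeded" gate from passing. The following is a minimal standalone sketch, not part of the e2e suite, that pulls the same signal with client-go; the kubeconfig path is taken from the job log and is otherwise an assumption.

```go
// Sketch: list kube-system pods and report containers that are not ready,
// mirroring the "Container <name> ready: false, restart count N" lines in the
// node dump. Illustrative only; kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if !cs.Ready {
				// Same shape as the failing entries in the dump above.
				fmt.Printf("%s/%s container %s ready=false restarts=%d\n",
					pod.Spec.NodeName, pod.Name, cs.Name, cs.RestartCount)
			}
		}
	}
}
```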
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:11:51.571 (from ginkgo_report.xml)
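The log that follows shows the clean-reboot variant's pattern: confirm each node's Ready condition and its probe-less pods are healthy, SSH `nohup sh -c 'sleep 10 && sudo reboot'` onto the node, then wait up to 2m0s for Ready to flip to false before expecting it to come back. A rough client-go sketch of that Ready-condition polling is below; it is an illustration, not the suite's own helper, and the 2s poll interval, kubeconfig path, and recovery timeout are assumptions.

```go
// Sketch of the Ready-condition polling visible in the log, e.g.
// "Waiting up to 2m0s for node ... condition Ready to be false".
// Illustrative only; interval, kubeconfig path, and recovery timeout assumed.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady blocks until the node's Ready condition equals wantReady
// or the timeout expires.
func waitForNodeReady(c kubernetes.Interface, node string, wantReady bool, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		n, err := c.CoreV1().Nodes().Get(context.Background(), node, metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient API errors
		}
		for _, cond := range n.Status.Conditions {
			if cond.Type == v1.NodeReady {
				return (cond.Status == v1.ConditionTrue) == wantReady, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node := "bootstrap-e2e-minion-group-7cr1"
	// After the reboot command is issued over SSH, the test expects the node
	// to go NotReady (it went down) and then become Ready again. The 5m
	// recovery timeout here is illustrative, not taken from the log.
	if err := waitForNodeReady(client, node, false, 2*time.Minute); err != nil {
		fmt.Println("node never became NotReady:", err)
	}
	if err := waitForNodeReady(client, node, true, 5*time.Minute); err != nil {
		fmt.Println("node never recovered:", err)
	}
}
```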
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:08:22.476 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:08:22.476 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:08:22.476 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 09:08:22.476 Jan 30 09:08:22.476: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 09:08:22.477 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 09:08:22.605 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 09:08:22.686 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:08:22.789 (313ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:08:22.789 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:08:22.789 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/30/23 09:08:22.789 Jan 30 09:08:22.884: INFO: Getting bootstrap-e2e-minion-group-hx8v Jan 30 09:08:22.884: INFO: Getting bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:22.884: INFO: Getting bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:22.929: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be true Jan 30 09:08:22.929: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hx8v condition Ready to be true Jan 30 09:08:22.929: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be true Jan 30 09:08:22.974: INFO: Node bootstrap-e2e-minion-group-7cr1 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:08:22.974: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:08:22.974: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-f6lhm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Node bootstrap-e2e-minion-group-hx8v has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Node bootstrap-e2e-minion-group-ctd3 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hb8pr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-xdrbh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ljgk8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:23.020: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.71084ms Jan 30 09:08:23.020: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.022: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1": Phase="Running", Reason="", readiness=true. Elapsed: 47.554397ms Jan 30 09:08:23.022: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.023: INFO: Pod "metadata-proxy-v0.1-f6lhm": Phase="Running", Reason="", readiness=true. Elapsed: 48.22884ms Jan 30 09:08:23.023: INFO: Pod "metadata-proxy-v0.1-f6lhm" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.023: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:08:23.023: INFO: Getting external IP address for bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:23.023: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-7cr1(34.82.80.94:22) Jan 30 09:08:23.023: INFO: Pod "metadata-proxy-v0.1-hb8pr": Phase="Running", Reason="", readiness=true. Elapsed: 48.256394ms Jan 30 09:08:23.023: INFO: Pod "metadata-proxy-v0.1-hb8pr" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.025: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=true. Elapsed: 49.873325ms Jan 30 09:08:23.025: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.026: INFO: Pod "metadata-proxy-v0.1-ljgk8": Phase="Running", Reason="", readiness=true. Elapsed: 51.166431ms Jan 30 09:08:23.026: INFO: Pod "metadata-proxy-v0.1-ljgk8" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.026: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 51.192983ms Jan 30 09:08:23.026: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. 
Elapsed: 51.313773ms Jan 30 09:08:23.026: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:23.026: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:23.543: INFO: ssh prow@34.82.80.94:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 30 09:08:23.543: INFO: ssh prow@34.82.80.94:22: stdout: "" Jan 30 09:08:23.543: INFO: ssh prow@34.82.80.94:22: stderr: "" Jan 30 09:08:23.543: INFO: ssh prow@34.82.80.94:22: exit code: 0 Jan 30 09:08:23.543: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be false Jan 30 09:08:23.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:25.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 2.094988592s Jan 30 09:08:25.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.095128689s Jan 30 09:08:25.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:25.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:25.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:27.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 4.094865847s Jan 30 09:08:27.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 4.094740655s Jan 30 09:08:27.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:27.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:27.672: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:29.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.095572992s Jan 30 09:08:29.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:29.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 6.095785963s Jan 30 09:08:29.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:29.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:31.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 8.095688927s Jan 30 09:08:31.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 8.095564918s Jan 30 09:08:31.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:31.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:31.758: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:08:33.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 10.096291738s Jan 30 09:08:33.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 10.096167579s Jan 30 09:08:33.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:33.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:33.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:35.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 12.0954319s Jan 30 09:08:35.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.095586485s Jan 30 09:08:35.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:35.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:35.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:37.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 14.095332117s Jan 30 09:08:37.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 14.095207048s Jan 30 09:08:37.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:37.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:37.887: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:39.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.096339998s Jan 30 09:08:39.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:39.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 16.096321249s Jan 30 09:08:39.072: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:39.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:41.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 18.095333051s Jan 30 09:08:41.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 18.095207873s Jan 30 09:08:41.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:41.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:41.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:08:43.074: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=true. Elapsed: 20.098675385s Jan 30 09:08:43.074: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 20.098550233s Jan 30 09:08:43.074: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" satisfied condition "running and ready, or succeeded" Jan 30 09:08:43.074: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:08:43.074: INFO: Getting external IP address for bootstrap-e2e-minion-group-hx8v Jan 30 09:08:43.074: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-hx8v(34.127.2.148:22) Jan 30 09:08:43.074: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:43.595: INFO: ssh prow@34.127.2.148:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 30 09:08:43.595: INFO: ssh prow@34.127.2.148:22: stdout: "" Jan 30 09:08:43.595: INFO: ssh prow@34.127.2.148:22: stderr: "" Jan 30 09:08:43.595: INFO: ssh prow@34.127.2.148:22: exit code: 0 Jan 30 09:08:43.595: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hx8v condition Ready to be false Jan 30 09:08:43.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:44.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:45.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 22.094549293s Jan 30 09:08:45.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:45.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:46.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:47.069: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 24.094213464s Jan 30 09:08:47.069: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:47.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:48.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:49.069: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 26.09421921s Jan 30 09:08:49.069: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:49.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:50.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:51.069: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=true. Elapsed: 28.093885904s Jan 30 09:08:51.069: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" satisfied condition "running and ready, or succeeded" Jan 30 09:08:51.069: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:08:51.069: INFO: Getting external IP address for bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:51.069: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-ctd3(35.197.47.9:22) Jan 30 09:08:51.591: INFO: ssh prow@35.197.47.9:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 30 09:08:51.591: INFO: ssh prow@35.197.47.9:22: stdout: "" Jan 30 09:08:51.591: INFO: ssh prow@35.197.47.9:22: stderr: "" Jan 30 09:08:51.591: INFO: ssh prow@35.197.47.9:22: exit code: 0 Jan 30 09:08:51.591: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be false Jan 30 09:08:51.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:51.808: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:52.193: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:53.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:53.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:54.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:55.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:55.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:56.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:57.763: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:57.940: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:58.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:59.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:59.983: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:00.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:01.851: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:02.027: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:02.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:03.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:04.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:04.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:05.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:06.122: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:06.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:07.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:08.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:08.541: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:10.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:10.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:10.584: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:12.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:12.255: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:12.628: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:09:14.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:14.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:14.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:16.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:16.341: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:16.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:18.196: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:18.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:18.758: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:20.240: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:20.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:20.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:22.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:22.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:22.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:24.334: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:24.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:24.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:26.377: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:26.579: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:26.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:28.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:28.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:28.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:30.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:30.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:31.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:32.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:32.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:33.065: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:34.550: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:34.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:35.108: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:36.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:36.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:37.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:38.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:09:38.836: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:39.194: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:40.679: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:40.879: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:41.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:42.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:42.923: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:43.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:44.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:44.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:45.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:46.811: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:47.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:47.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:48.854: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:49.052: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:49.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:50.897: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:51.096: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:51.467: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:52.940: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:53.139: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:53.511: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:54.983: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:55.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:55.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:57.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:57.225: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:57.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:59.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:59.270: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:59.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:01.112: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:01.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:01.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:03.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:03.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:10:03.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:05.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:05.400: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:05.771: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:07.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:07.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:07.815: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:09.291: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:09.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:09.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:11.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:11.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:11.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:13.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:13.575: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:13.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:15.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:15.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:15.989: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:17.468: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:17.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:18.033: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:19.511: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:19.704: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:20.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:21.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:21.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:22.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:23.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:23.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:24.121: INFO: Node bootstrap-e2e-minion-group-7cr1 didn't reach desired Ready condition status (false) within 2m0s Jan 30 09:10:25.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:25.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:27.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:27.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:29.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:31.771: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:31.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:33.814: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:34.008: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:35.857: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be true Jan 30 09:10:35.899: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:36.052: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-hx8v condition Ready to be true Jan 30 09:10:36.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:37.943: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:38.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:39.985: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:40.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:42.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:42.226: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:44.075: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:44.271: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:46.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:46.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:48.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:48.358: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:50.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:50.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:52.256: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:52.445: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:54.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:54.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:56.341: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:56.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:58.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:58.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:00.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:00.620: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:02.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:02.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:04.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:04.710: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:06.557: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:06.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:08.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:08.796: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 09:11:10.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:10.840: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:12.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:12.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:14.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:14.929: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:16.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:16.973: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:18.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:19.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:20.861: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:21.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:22.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:23.104: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:24.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:25.147: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:26.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:27.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:29.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
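The cycle driving the polling above is visible in the log itself: the test SSHes to each node, issues "nohup sh -c 'sleep 10 && sudo reboot'", waits up to 2m0s for the node's Ready condition to become false, and then up to 5m0s for it to return to true. For manual triage of a node such as bootstrap-e2e-minion-group-7cr1, which never left Ready within its 2m0s window, the same cycle can be replayed by hand. A rough bash sketch, assuming kubectl access to the cluster and SSH access to the node's external IP; NODE, NODE_IP, and SSH_USER below are illustrative placeholders, not values taken from this run:

```bash
#!/usr/bin/env bash
# Manually replay the reboot/wait cycle seen in the log above (placeholders only).
NODE=bootstrap-e2e-minion-group-7cr1
NODE_IP=203.0.113.10   # placeholder; the test resolves the real external IP itself
SSH_USER=prow

ready() {
  kubectl get node "$NODE" \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
}

# 1. Trigger the same delayed reboot the test issues over SSH.
ssh "$SSH_USER@$NODE_IP" "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &"

# 2. Wait up to ~2 minutes for Ready to leave "True" (the step that timed out here).
for _ in $(seq 1 60); do
  [ "$(ready)" != "True" ] && break
  sleep 2
done

# 3. Wait for the node to report Ready again once it is back up.
until [ "$(ready)" = "True" ]; do sleep 5; done
echo "$NODE is Ready again"
```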
Jan 30 09:11:29.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:31.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:31.279: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:29 +0000 UTC}]. Failure Jan 30 09:11:33.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:33.322: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:29 +0000 UTC}]. Failure Jan 30 09:11:35.166: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:35.365: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:29 +0000 UTC}]. Failure Jan 30 09:11:37.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:37.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:29 +0000 UTC}]. Failure Jan 30 09:11:39.253: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:39.474: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:29 +0000 UTC}]. Failure Jan 30 09:11:41.297: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:40 +0000 UTC}]. 
Failure Jan 30 09:11:41.517: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:11:41.517: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:41.517: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-xdrbh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:41.517: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:41.517: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ljgk8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:41.613: INFO: Pod "metadata-proxy-v0.1-ljgk8": Phase="Running", Reason="", readiness=false. Elapsed: 95.685183ms Jan 30 09:11:41.613: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ljgk8' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:10:34 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:11:38 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:11:41.613: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=false. Elapsed: 95.945621ms Jan 30 09:11:41.613: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-xdrbh' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:10:34 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:11:38 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:11:41.614: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 97.365519ms Jan 30 09:11:41.614: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:43 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:43 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:11:41.614: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. 
Elapsed: 97.28134ms Jan 30 09:11:41.614: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:10:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:11:43.341: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:40 +0000 UTC}]. Failure Jan 30 09:11:43.658: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=false. Elapsed: 2.140892423s Jan 30 09:11:43.658: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-xdrbh' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:10:34 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:11:38 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:11:43.659: INFO: Pod "metadata-proxy-v0.1-ljgk8": Phase="Running", Reason="", readiness=true. Elapsed: 2.142406366s Jan 30 09:11:43.659: INFO: Pod "metadata-proxy-v0.1-ljgk8" satisfied condition "running and ready, or succeeded" Jan 30 09:11:43.659: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.142795316s Jan 30 09:11:43.659: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:43 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:43 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:11:43.659: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=true. Elapsed: 2.142604382s Jan 30 09:11:43.659: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" satisfied condition "running and ready, or succeeded" Jan 30 09:11:45.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:40 +0000 UTC}]. Failure Jan 30 09:11:45.657: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.140055485s Jan 30 09:11:45.657: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh" satisfied condition "running and ready, or succeeded" Jan 30 09:11:45.657: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 4.140237483s Jan 30 09:11:45.657: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 30 09:11:45.657: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:11:45.657: INFO: Reboot successful on node bootstrap-e2e-minion-group-hx8v Jan 30 09:11:47.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:40 +0000 UTC}]. Failure Jan 30 09:11:49.481: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:40 +0000 UTC}]. Failure Jan 30 09:11:51.527: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:11:51.527: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hb8pr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:51.527: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:51.571: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=true. Elapsed: 43.522166ms Jan 30 09:11:51.571: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" satisfied condition "running and ready, or succeeded" Jan 30 09:11:51.571: INFO: Pod "metadata-proxy-v0.1-hb8pr": Phase="Running", Reason="", readiness=true. Elapsed: 43.596333ms Jan 30 09:11:51.571: INFO: Pod "metadata-proxy-v0.1-hb8pr" satisfied condition "running and ready, or succeeded" Jan 30 09:11:51.571: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:11:51.571: INFO: Reboot successful on node bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:51.571: INFO: Node bootstrap-e2e-minion-group-7cr1 failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:11:51.571 < Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/30/23 09:11:51.571 (3m28.782s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:11:51.571 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 09:11:51.571 Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5d7s9 to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.12:8181/ready": dial tcp 10.64.0.12:8181: connect: connection refused Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5d7s9_kube-system(7bd270c5-f2ec-4a85-9058-86135914ebab) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": dial tcp 10.64.0.15:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5d7s9 Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
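
The wait loop logged earlier ("didn't have condition {Ready True}") and the "Condition Ready of node ... is false, but Node is tainted by NodeController" messages are both plain status reads. A minimal client-go sketch of the same kind of check, assuming a clientset has already been built (construction omitted); the pod and node names are taken from the log above.

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkPodAndNode reads the Ready condition of a pod and the Ready condition
// plus taints of a node, mirroring what the harness reports above.
func checkPodAndNode(ctx context.Context, cs kubernetes.Interface) error {
	pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "volume-snapshot-controller-0", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("pod %s Ready=%s reason=%s\n", pod.Name, c.Status, c.Reason)
		}
	}
	node, err := cs.CoreV1().Nodes().Get(ctx, "bootstrap-e2e-minion-group-ctd3", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
		}
	}
	for _, t := range node.Spec.Taints {
		// e.g. node.kubernetes.io/unreachable with effect NoSchedule/NoExecute,
		// as applied by the node controller in the messages above.
		fmt.Printf("taint %s:%s\n", t.Key, t.Effect)
	}
	return nil
}
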
Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5d7s9_kube-system(7bd270c5-f2ec-4a85-9058-86135914ebab) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-w57z6 to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.519322897s (3.519341369s including waiting) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.4:8181/ready": dial tcp 10.64.0.4:8181: connect: connection refused Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
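
The repeated "Readiness probe failed: Get http://...:8181/ready" events for coredns come from an HTTP readiness probe; while it fails, the pod's Ready condition stays False, which is exactly what the wait loop earlier kept reporting. A sketch of a probe of that shape, using k8s.io/api/core/v1; the period, timeout, and threshold values are assumptions for illustration, not the actual coredns addon manifest.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// readinessProbe is an illustrative HTTP readiness probe similar in shape to
// the ":8181/ready" checks failing in the events above.
var readinessProbe = corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/ready",
			Port: intstr.FromInt(8181),
		},
	},
	PeriodSeconds:    10, // how often the kubelet probes
	TimeoutSeconds:   1,  // "context deadline exceeded" means this limit was hit
	FailureThreshold: 3,  // consecutive failures before Ready flips to False
}
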
Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.10:8181/ready": dial tcp 10.64.0.10:8181: connect: connection refused Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-w57z6_kube-system(1e79e82a-e647-48da-a4fd-05ad6d505eef) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-w57z6 Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.23:8181/ready": dial tcp 10.64.0.23:8181: connect: connection refused Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-w57z6_kube-system(1e79e82a-e647-48da-a4fd-05ad6d505eef) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-w57z6 Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5d7s9 Jan 30 09:11:51.625: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 09:11:51.625: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:11:51.625: INFO: event for 
etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
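
The etcd liveness error above ("failed to exec in container: container is in CONTAINER_EXITED state") indicates an exec-style probe: the kubelet tried to run the probe command inside a container that had already exited. A hedged sketch of an exec liveness probe; the command and thresholds are placeholders, not the actual bootstrap-e2e master manifest.

package sketch

import corev1 "k8s.io/api/core/v1"

// etcdLiveness is an illustrative exec-based liveness probe. If the container
// is not running, the exec cannot be performed and the probe errors, as seen
// in the events above.
var etcdLiveness = corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		Exec: &corev1.ExecAction{
			// Hypothetical health command run inside the container.
			Command: []string{"/bin/sh", "-c", "etcdctl endpoint health"},
		},
	},
	InitialDelaySeconds: 15,
	TimeoutSeconds:      15,
	FailureThreshold:    5,
}
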
Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:11:51.625: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_94038 became leader Jan 30 09:11:51.625: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_32bba became leader Jan 30 09:11:51.625: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b3e6 became leader Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-b8sc4 to bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 629.593814ms (629.614416ms including waiting) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
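
The "became leader" events for ingress-gce-lock (and later for kube-controller-manager and kube-scheduler) are emitted by client-go leader election: each controller restart acquires the lock under a new identity. A minimal sketch using a Lease lock; the lock name, identity, and timings are assumptions, not the configuration these controllers actually use.

package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLeaderElection runs `run` only while holding the lease; losing or
// re-acquiring it produces exactly the kind of LeaderElection events above.
func runWithLeaderElection(ctx context.Context, cs kubernetes.Interface, id string, run func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-lock", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: run,                      // this instance won the election
			OnStoppedLeading: func() { /* lost lease */ },
		},
	})
}
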
Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.3:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Failed: Error: failed to get sandbox container task: no running task found: task 1d9c817ce846f529aa76391072c1a7fd56a9f47957fc17a2690b2671de27ff84 not found: not found Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rj7fc to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.815723669s (1.81573909s including waiting) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 
09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.7:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Failed: Error: failed to get sandbox container task: no running task found: task 11f3dcad8b3972dd50b4e21b10c349a64def00d0106a07d500fcf4637de4bd0d not found: not found Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.17:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
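
The Killing / BackOff sequence above (liveness probe fails, container is restarted, restarts get rate-limited) is also visible directly in pod status. A small client-go sketch of reading restart counts and last termination state; the namespace and pod name arguments are left generic.

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpRestarts prints per-container restart counts, the current waiting
// reason (e.g. CrashLoopBackOff), and the last termination details.
func dumpRestarts(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, s := range pod.Status.ContainerStatuses {
		fmt.Printf("%s: restarts=%d\n", s.Name, s.RestartCount)
		if s.State.Waiting != nil {
			fmt.Printf("  waiting: %s\n", s.State.Waiting.Reason)
		}
		if t := s.LastTerminationState.Terminated; t != nil {
			fmt.Printf("  last exit: code=%d reason=%s\n", t.ExitCode, t.Reason)
		}
	}
	return nil
}
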
Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rj7fc_kube-system(d1e6165b-b63d-4023-904f-a42ff691e8ae) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-skfnx to bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 625.155725ms (625.171974ms including waiting) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rj7fc Jan 30 09:11:51.625: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-b8sc4 Jan 30 09:11:51.625: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-skfnx Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 30 09:11:51.625: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 30 09:11:51.625: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 30 09:11:51.625: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 30 09:11:51.625: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
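
The earlier ReplicaSet FailedCreate message for coredns ("insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]") refers to a ResourceQuota whose scopeSelector matches pods by priority class. The quota object actually installed in this cluster is not shown in the log; the sketch below only illustrates the shape of such a quota, with an assumed name and hard limit.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// criticalPodQuota is an illustrative scoped quota: it only counts pods whose
// priorityClassName is one of the listed system-critical classes.
var criticalPodQuota = corev1.ResourceQuota{
	ObjectMeta: metav1.ObjectMeta{Name: "critical-pods", Namespace: "kube-system"},
	Spec: corev1.ResourceQuotaSpec{
		Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("100")},
		ScopeSelector: &corev1.ScopeSelector{
			MatchExpressions: []corev1.ScopedResourceSelectorRequirement{{
				ScopeName: corev1.ResourceQuotaScopePriorityClass,
				Operator:  corev1.ScopeSelectorOpIn,
				Values:    []string{"system-node-critical", "system-cluster-critical"},
			}},
		},
	},
}
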
Jan 30 09:11:51.625: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 30 09:11:51.625: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(4fc5a5aeac3c203e3876adb08d878c93) Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ce182e7b-00b7-4169-8624-f53196308681 became leader Jan 30 09:11:51.625: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_274fd4ca-797b-43c2-b1b6-f36d9e36c2e7 became leader Jan 30 09:11:51.625: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_4b65a89d-ba5e-49e4-8048-9ec50f56a58a became leader Jan 30 09:11:51.625: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c0831107-4ce4-4c38-b5d9-9a3dd92f107b became leader Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
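
The FailedScheduling messages ("untolerated taint {node.kubernetes.io/not-ready: }") and the TaintManagerEviction entries are both governed by tolerations on the pod spec: NoSchedule taints block scheduling, NoExecute taints evict running pods unless tolerated. A sketch of typical tolerations for these taints; the 300-second value mirrors the usual default added by admission, shown explicitly here for illustration only.

package sketch

import corev1 "k8s.io/api/core/v1"

// unreachableTolerations returns tolerations of the kind that let pods ride
// out a not-ready/unreachable node: the NoExecute toleration delays eviction,
// and an Exists/NoSchedule toleration (as DaemonSet pods get automatically)
// allows scheduling despite the taint.
func unreachableTolerations() []corev1.Toleration {
	notReadySeconds := int64(300)
	return []corev1.Toleration{
		{
			Key:               "node.kubernetes.io/not-ready",
			Operator:          corev1.TolerationOpExists,
			Effect:            corev1.TaintEffectNoExecute,
			TolerationSeconds: &notReadySeconds, // evict only after 5 minutes
		},
		{
			Key:      "node.kubernetes.io/unreachable",
			Operator: corev1.TolerationOpExists,
			Effect:   corev1.TaintEffectNoSchedule,
		},
	}
}
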
Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-xdrbh to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.161986755s (3.162044961s including waiting) Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-xdrbh Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container autoscaler Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-xdrbh Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 09:11:51.625: INFO: event for kube-dns: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for 
kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7cr1_kube-system(dd1d9c1acf429448066a68f4147cfb77) Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ctd3_kube-system(f92a9aed872df1bead32b1c0dd213385) Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
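
The FailedToUpdateEndpoint event above ("Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again") is an optimistic-concurrency conflict: an update was submitted with a stale resourceVersion. The standard client-go pattern is to re-read and retry on conflict; the object names below are taken from the log, and the mutation itself is a placeholder.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateKubeDNSEndpoints re-reads the object on every attempt so a conflict
// simply triggers another iteration instead of surfacing as an error.
func updateKubeDNSEndpoints(ctx context.Context, cs kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ep, err := cs.CoreV1().Endpoints("kube-system").Get(ctx, "kube-dns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		// ... mutate ep based on the freshly read object ...
		_, err = cs.CoreV1().Endpoints("kube-system").Update(ctx, ep, metav1.UpdateOptions{})
		return err
	})
}
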
Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hx8v_kube-system(acb97e253f2500aa0581d024a2217293) Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(5b3c0a3dad3d723f9e5778ab0a62849c) Jan 30 09:11:51.625: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_34628f7a-9073-4ee1-9bb3-51be47583fdb became leader Jan 30 09:11:51.625: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_914f6c8b-8db8-44f8-a433-b4e094f84179 became leader Jan 30 09:11:51.625: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_7f5e92cb-6a3a-45d0-be98-a7453645cadf became leader Jan 30 09:11:51.625: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_2e191714-4d71-4061-b14a-06b3d43bf967 became leader Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-fq84f to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.812003002s (1.812012686s including waiting) Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container default-http-backend Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.626: INFO: event for l7-default-backend-8549d69d99-fq84f: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-fq84f Jan 30 09:11:51.626: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:11:51.626: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:11:51.626: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-fq84f Jan 30 09:11:51.626: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 09:11:51.626: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 09:11:51.626: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 09:11:51.626: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 09:11:51.626: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 09:11:51.626: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-d2qbs to bootstrap-e2e-master Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 767.072137ms (767.083529ms including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.898844974s (1.898853058s including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-f6lhm to 
bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 760.69768ms (760.732368ms including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.890232448s (1.890241652s including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hb8pr to bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 790.982977ms (791.000841ms including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for 
metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.062535103s (2.062546601s including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ljgk8 to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 732.378395ms (732.411068ms including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.82877905s (1.828788865s including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created 
container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ljgk8 Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-d2qbs Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hb8pr Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-f6lhm Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-v25xc to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.900251051s (3.900291297s including waiting) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.359025956s (3.35903606s including waiting) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server-nanny Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server-nanny Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server-nanny Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-q4757 to bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.428302629s (1.428313025s including waiting) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.023447114s (1.023460341s including waiting) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server-nanny Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server-nanny Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.510528775s (3.510537487s including waiting) Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2) Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:11:51.626 (54ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:11:51.626 Jan 30 09:11:51.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:11:51.68 (54ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:11:51.68 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:11:51.68 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 09:11:51.68 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 09:11:51.68 STEP: Collecting events from namespace "reboot-7046". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 09:11:51.68 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 09:11:51.75 Jan 30 09:11:51.791: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 09:11:51.791: INFO: Jan 30 09:11:51.834: INFO: Logging node info for node bootstrap-e2e-master Jan 30 09:11:51.876: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master a34af008-0528-47e4-a6c5-cd39d827847f 1307 0 2023-01-30 09:04:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 09:10:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:10:27 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:10:27 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:10:27 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:10:27 +0000 UTC,LastTransitionTime:2023-01-30 09:04:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.185.231.33,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:87a05ebeec11f95c366dec3ebfb54572,SystemUUID:87a05ebe-ec11-f95c-366d-ec3ebfb54572,BootID:b21fbdba-5e8a-4560-8e5c-0b3f13ec273b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:125279033,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:11:51.876: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 09:11:51.922: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 09:11:51.984: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container etcd-container ready: true, restart count 2 Jan 30 09:11:51.984: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 30 09:11:51.984: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 30 09:11:51.984: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container kube-scheduler ready: true, restart count 3 Jan 30 09:11:51.984: INFO: metadata-proxy-v0.1-d2qbs started at 2023-01-30 09:04:49 +0000 UTC (0+2 container statuses recorded) Jan 30 09:11:51.984: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:11:51.984: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:11:51.984: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container etcd-container ready: true, restart count 1 Jan 30 09:11:51.984: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container kube-apiserver ready: true, restart count 0 Jan 30 09:11:51.984: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container kube-addon-manager ready: true, restart count 1 Jan 30 09:11:51.984: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container l7-lb-controller ready: false, restart count 4 Jan 30 09:11:52.160: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 09:11:52.160: INFO: Logging node info for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:52.202: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7cr1 059c215f-20bf-4d2a-9d08-dd76e71cd121 1435 0 2023-01-30 09:04:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7cr1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:10:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:11:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 09:11:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-30 09:11:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-7cr1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.80.94,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:152f308b39d31a7b07927ba8747dc4e6,SystemUUID:152f308b-39d3-1a7b-0792-7ba8747dc4e6,BootID:041305a0-41ee-4de6-8a8c-5f412e8332bd,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:11:52.203: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:52.248: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:52.355: INFO: kube-proxy-bootstrap-e2e-minion-group-7cr1 started at 2023-01-30 09:04:13 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:52.355: INFO: Container kube-proxy ready: false, restart count 3 Jan 30 09:11:52.355: INFO: metadata-proxy-v0.1-f6lhm started at 2023-01-30 09:04:15 +0000 UTC (0+2 container statuses recorded) Jan 30 09:11:52.355: INFO: Container metadata-proxy ready: true, restart count 1 Jan 30 09:11:52.355: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 30 09:11:52.355: INFO: konnectivity-agent-b8sc4 started at 2023-01-30 09:04:29 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:52.355: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 30 09:11:52.514: INFO: Latency metrics for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:52.514: INFO: Logging node info for node bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:52.556: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ctd3 1fc63985-9867-4666-aa14-c3224e06ef55 1567 0 2023-01-30 09:04:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ctd3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:10:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-30 09:11:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:11:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 09:11:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-ctd3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:47 +0000 UTC,LastTransitionTime:2023-01-30 09:11:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:47 +0000 UTC,LastTransitionTime:2023-01-30 09:11:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:47 +0000 UTC,LastTransitionTime:2023-01-30 09:11:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:11:47 +0000 UTC,LastTransitionTime:2023-01-30 09:11:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.47.9,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ctd3.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ctd3.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:589c645a71700ad5ee732b565ea0a6c2,SystemUUID:589c645a-7170-0ad5-ee73-2b565ea0a6c2,BootID:e2652c78-7ec4-47ff-a9f6-a32d90c22cce,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:11:52.556: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:52.602: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:52.665: INFO: metrics-server-v0.5.2-867b8754b9-q4757 started at 2023-01-30 09:04:40 +0000 UTC (0+2 container statuses recorded) Jan 30 09:11:52.665: INFO: Container metrics-server ready: false, restart count 2 Jan 30 09:11:52.665: INFO: Container metrics-server-nanny ready: false, restart count 2 Jan 30 09:11:52.665: INFO: kube-proxy-bootstrap-e2e-minion-group-ctd3 started at 2023-01-30 09:04:13 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:52.665: INFO: Container kube-proxy ready: true, restart count 4 Jan 30 09:11:52.665: INFO: metadata-proxy-v0.1-hb8pr started at 2023-01-30 09:04:13 +0000 UTC (0+2 container statuses recorded) Jan 30 09:11:52.665: INFO: Container metadata-proxy ready: true, restart count 1 Jan 30 09:11:52.665: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 30 09:11:52.665: INFO: konnectivity-agent-skfnx started at 2023-01-30 09:04:29 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:52.665: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 30 09:11:58.252: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:58.252: INFO: Logging node info for node bootstrap-e2e-minion-group-hx8v Jan 30 09:11:58.295: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hx8v 2cf6f8aa-df64-4aca-ac1b-6cbf533da69a 1494 0 2023-01-30 09:04:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hx8v kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:10:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:11:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 09:11:37 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-30 09:11:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-hx8v,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 
UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:15 +0000 UTC,LastTransitionTime:2023-01-30 09:04:15 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.2.148,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hx8v.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hx8v.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:be2afed7762cfdb54d3ec5133fceeff6,SystemUUID:be2afed7-762c-fdb5-4d3e-c5133fceeff6,BootID:652b1023-b7e9-4b95-a539-4c93e8cda554,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:11:58.296: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hx8v Jan 30 09:11:58.341: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hx8v Jan 30 09:11:58.407: INFO: l7-default-backend-8549d69d99-fq84f started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container default-http-backend ready: false, restart count 1 Jan 30 09:11:58.408: INFO: kube-dns-autoscaler-5f6455f985-xdrbh started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container autoscaler ready: true, restart count 1 Jan 30 09:11:58.408: INFO: volume-snapshot-controller-0 started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container volume-snapshot-controller ready: true, restart count 4 Jan 30 09:11:58.408: INFO: coredns-6846b5b5f-w57z6 started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container coredns ready: false, restart count 4 Jan 30 09:11:58.408: INFO: metadata-proxy-v0.1-ljgk8 started at 2023-01-30 09:04:10 +0000 UTC (0+2 container statuses recorded) Jan 30 09:11:58.408: INFO: Container metadata-proxy ready: true, restart count 1 Jan 30 09:11:58.408: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 30 09:11:58.408: INFO: konnectivity-agent-rj7fc started at 2023-01-30 09:04:16 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 30 09:11:58.408: INFO: coredns-6846b5b5f-5d7s9 started at 2023-01-30 09:04:22 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container coredns ready: false, restart count 3 Jan 30 09:11:58.408: INFO: kube-proxy-bootstrap-e2e-minion-group-hx8v started at 2023-01-30 09:04:09 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container kube-proxy ready: true, restart count 4 Jan 30 09:12:22.910: INFO: Latency metrics for node bootstrap-e2e-minion-group-hx8v END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 09:12:22.91 (31.23s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 09:12:22.91 (31.23s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 09:12:22.91 STEP: Destroying namespace "reboot-7046" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/30/23 09:12:22.91 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 09:12:22.957 (47ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 09:12:22.957 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 09:12:22.957 (0s)
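The check that fails in both runs is the node Ready condition: the test reboots each node over SSH ("nohup sh -c 'sleep 10 && sudo reboot'"), waits up to 2m0s for Ready to flip to false, then up to 5m0s for it to return to true, and bootstrap-e2e-minion-group-7cr1 never leaves Ready=true within the window. As a rough triage aid (this is not the e2e framework's own helper, just a minimal sketch of the same readiness query), the snippet below lists each node's Ready condition with client-go; the kubeconfig flag default mirrors the path these jobs log, and is otherwise an assumption.

```go
// node_ready_check.go - a minimal triage sketch, not part of the e2e framework.
// It prints each node's Ready condition, approximating the
// "condition Ready to be true/false" polling seen in the log above.
package main

import (
	"context"
	"flag"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Default path is the one logged by these CI jobs; adjust for other clusters.
	kubeconfig := flag.String("kubeconfig", "/workspace/.kube/config", "path to kubeconfig")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == v1.NodeReady {
				// Reason/Message correspond to the log entries above, e.g.
				// KubeletReady or NodeStatusUnknown ("Kubelet stopped posting node status").
				fmt.Printf("%s Ready=%s reason=%s lastTransition=%s\n",
					n.Name, c.Status, c.Reason, c.LastTransitionTime)
			}
		}
	}
}
```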
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:11:51.571 (from junit_01.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:08:22.476 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:08:22.476 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:08:22.476 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 09:08:22.476 Jan 30 09:08:22.476: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 09:08:22.477 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 09:08:22.605 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 09:08:22.686 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:08:22.789 (313ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:08:22.789 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:08:22.789 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/30/23 09:08:22.789 Jan 30 09:08:22.884: INFO: Getting bootstrap-e2e-minion-group-hx8v Jan 30 09:08:22.884: INFO: Getting bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:22.884: INFO: Getting bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:22.929: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be true Jan 30 09:08:22.929: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hx8v condition Ready to be true Jan 30 09:08:22.929: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be true Jan 30 09:08:22.974: INFO: Node bootstrap-e2e-minion-group-7cr1 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:08:22.974: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:08:22.974: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-f6lhm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Node bootstrap-e2e-minion-group-hx8v has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Node bootstrap-e2e-minion-group-ctd3 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: 
[kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hb8pr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-xdrbh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ljgk8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:22.975: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:08:23.020: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.71084ms Jan 30 09:08:23.020: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.022: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1": Phase="Running", Reason="", readiness=true. Elapsed: 47.554397ms Jan 30 09:08:23.022: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.023: INFO: Pod "metadata-proxy-v0.1-f6lhm": Phase="Running", Reason="", readiness=true. Elapsed: 48.22884ms Jan 30 09:08:23.023: INFO: Pod "metadata-proxy-v0.1-f6lhm" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.023: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-7cr1 metadata-proxy-v0.1-f6lhm] Jan 30 09:08:23.023: INFO: Getting external IP address for bootstrap-e2e-minion-group-7cr1 Jan 30 09:08:23.023: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-7cr1(34.82.80.94:22) Jan 30 09:08:23.023: INFO: Pod "metadata-proxy-v0.1-hb8pr": Phase="Running", Reason="", readiness=true. Elapsed: 48.256394ms Jan 30 09:08:23.023: INFO: Pod "metadata-proxy-v0.1-hb8pr" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.025: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=true. Elapsed: 49.873325ms Jan 30 09:08:23.025: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.026: INFO: Pod "metadata-proxy-v0.1-ljgk8": Phase="Running", Reason="", readiness=true. Elapsed: 51.166431ms Jan 30 09:08:23.026: INFO: Pod "metadata-proxy-v0.1-ljgk8" satisfied condition "running and ready, or succeeded" Jan 30 09:08:23.026: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 51.192983ms Jan 30 09:08:23.026: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. 
Elapsed: 51.313773ms Jan 30 09:08:23.026: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:23.026: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:23.543: INFO: ssh prow@34.82.80.94:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 30 09:08:23.543: INFO: ssh prow@34.82.80.94:22: stdout: "" Jan 30 09:08:23.543: INFO: ssh prow@34.82.80.94:22: stderr: "" Jan 30 09:08:23.543: INFO: ssh prow@34.82.80.94:22: exit code: 0 Jan 30 09:08:23.543: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be false Jan 30 09:08:23.586: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:25.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 2.094988592s Jan 30 09:08:25.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.095128689s Jan 30 09:08:25.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:25.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:25.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:27.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 4.094865847s Jan 30 09:08:27.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 4.094740655s Jan 30 09:08:27.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:27.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:27.672: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:29.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.095572992s Jan 30 09:08:29.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:29.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 6.095785963s Jan 30 09:08:29.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:29.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:31.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 8.095688927s Jan 30 09:08:31.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 8.095564918s Jan 30 09:08:31.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:31.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:31.758: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:08:33.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 10.096291738s Jan 30 09:08:33.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 10.096167579s Jan 30 09:08:33.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:33.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:33.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:35.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 12.0954319s Jan 30 09:08:35.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.095586485s Jan 30 09:08:35.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:35.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:35.844: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:37.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 14.095332117s Jan 30 09:08:37.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 14.095207048s Jan 30 09:08:37.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:37.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:37.887: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:39.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. 
Elapsed: 16.096339998s Jan 30 09:08:39.071: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:39.071: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 16.096321249s Jan 30 09:08:39.072: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:39.931: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:41.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. Elapsed: 18.095333051s Jan 30 09:08:41.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 18.095207873s Jan 30 09:08:41.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:11 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:08:41.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:41.974: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:08:43.074: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=true. Elapsed: 20.098675385s Jan 30 09:08:43.074: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 20.098550233s Jan 30 09:08:43.074: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" satisfied condition "running and ready, or succeeded" Jan 30 09:08:43.074: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:08:43.074: INFO: Getting external IP address for bootstrap-e2e-minion-group-hx8v Jan 30 09:08:43.074: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-hx8v(34.127.2.148:22) Jan 30 09:08:43.074: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:43.595: INFO: ssh prow@34.127.2.148:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 30 09:08:43.595: INFO: ssh prow@34.127.2.148:22: stdout: "" Jan 30 09:08:43.595: INFO: ssh prow@34.127.2.148:22: stderr: "" Jan 30 09:08:43.595: INFO: ssh prow@34.127.2.148:22: exit code: 0 Jan 30 09:08:43.595: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-hx8v condition Ready to be false Jan 30 09:08:43.638: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:44.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:45.070: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 22.094549293s Jan 30 09:08:45.070: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:45.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:46.062: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:47.069: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 24.094213464s Jan 30 09:08:47.069: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:47.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:48.107: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:49.069: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=false. Elapsed: 26.09421921s Jan 30 09:08:49.069: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-ctd3' on 'bootstrap-e2e-minion-group-ctd3' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:19 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:13 +0000 UTC }] Jan 30 09:08:49.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:50.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:51.069: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=true. Elapsed: 28.093885904s Jan 30 09:08:51.069: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" satisfied condition "running and ready, or succeeded" Jan 30 09:08:51.069: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:08:51.069: INFO: Getting external IP address for bootstrap-e2e-minion-group-ctd3 Jan 30 09:08:51.069: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-ctd3(35.197.47.9:22) Jan 30 09:08:51.591: INFO: ssh prow@35.197.47.9:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 30 09:08:51.591: INFO: ssh prow@35.197.47.9:22: stdout: "" Jan 30 09:08:51.591: INFO: ssh prow@35.197.47.9:22: stderr: "" Jan 30 09:08:51.591: INFO: ssh prow@35.197.47.9:22: exit code: 0 Jan 30 09:08:51.591: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be false Jan 30 09:08:51.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:51.808: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:52.193: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:53.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:53.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:54.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:55.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:55.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:56.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:57.763: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:57.940: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:58.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:59.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:08:59.983: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:00.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:01.851: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:02.027: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:02.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:03.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:04.072: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:04.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:05.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:06.122: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:06.497: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:07.980: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:08.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:08.541: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:10.023: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:10.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:10.584: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:12.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:12.255: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:12.628: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:09:14.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:14.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:14.670: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:16.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:16.341: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:16.714: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:18.196: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:18.385: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:18.758: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:20.240: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:20.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:20.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:22.282: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:22.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:22.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:24.334: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:24.535: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:24.892: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:26.377: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:26.579: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:26.934: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:28.420: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:28.622: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:28.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:30.463: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:30.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:31.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:32.507: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:32.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:33.065: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:34.550: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:34.750: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:35.108: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:36.593: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:36.793: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:37.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:38.636: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:09:38.836: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:39.194: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:40.679: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:40.879: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:41.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:42.724: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:42.923: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:43.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:44.767: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:44.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:45.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:46.811: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:47.009: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:47.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:48.854: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:49.052: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:49.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:50.897: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:51.096: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:51.467: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:52.940: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:53.139: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:53.511: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:54.983: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:55.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:55.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:57.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:57.225: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:57.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:59.069: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:59.270: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:09:59.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:01.112: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:01.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:01.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:03.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:03.357: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:10:03.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:05.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:05.400: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:05.771: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:07.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:07.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:07.815: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:09.291: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:09.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:09.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:11.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:11.531: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:11.901: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:13.380: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:13.575: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:13.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:15.424: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:15.618: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:15.989: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:17.468: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:17.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:18.033: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:19.511: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:19.704: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:20.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:21.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:21.747: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:22.121: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:23.597: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:23.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:24.121: INFO: Node bootstrap-e2e-minion-group-7cr1 didn't reach desired Ready condition status (false) within 2m0s Jan 30 09:10:25.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:25.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:27.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:27.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:29.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:31.771: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:31.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:33.814: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:34.008: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:10:35.857: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be true Jan 30 09:10:35.899: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:36.052: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-hx8v condition Ready to be true Jan 30 09:10:36.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:37.943: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:38.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:39.985: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:40.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:42.029: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:42.226: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:44.075: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:44.271: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:46.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:46.314: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:48.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:48.358: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:50.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:50.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:52.256: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:52.445: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:54.298: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:54.489: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:56.341: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:56.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:58.383: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:10:58.578: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:00.426: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:00.620: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:02.471: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:02.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:04.514: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:04.710: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:06.557: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:06.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:08.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:08.796: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 09:11:10.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:10.840: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:12.687: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:12.885: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:14.731: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:14.929: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:16.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:16.973: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:18.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:19.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:20.861: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:21.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:22.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:23.104: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:24.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:25.147: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:26.993: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:27.191: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:29.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 30 09:11:29.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:31.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:31.279: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:29 +0000 UTC}]. Failure Jan 30 09:11:33.124: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:33.322: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:29 +0000 UTC}]. Failure Jan 30 09:11:35.166: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:35.365: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:29 +0000 UTC}]. Failure Jan 30 09:11:37.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:37.428: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:29 +0000 UTC}]. Failure Jan 30 09:11:39.253: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 30 09:11:39.474: INFO: Condition Ready of node bootstrap-e2e-minion-group-hx8v is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:29 +0000 UTC}]. Failure Jan 30 09:11:41.297: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:40 +0000 UTC}]. 
Failure Jan 30 09:11:41.517: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:11:41.517: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:41.517: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-xdrbh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:41.517: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:41.517: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ljgk8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:41.613: INFO: Pod "metadata-proxy-v0.1-ljgk8": Phase="Running", Reason="", readiness=false. Elapsed: 95.685183ms Jan 30 09:11:41.613: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-ljgk8' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:10:34 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:11:38 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:11:41.613: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=false. Elapsed: 95.945621ms Jan 30 09:11:41.613: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-xdrbh' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:10:34 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:11:38 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:11:41.614: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 97.365519ms Jan 30 09:11:41.614: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:43 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:43 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:11:41.614: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=false. 
Elapsed: 97.28134ms Jan 30 09:11:41.614: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-hx8v' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:10:34 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:09 +0000 UTC }] Jan 30 09:11:43.341: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:40 +0000 UTC}]. Failure Jan 30 09:11:43.658: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=false. Elapsed: 2.140892423s Jan 30 09:11:43.658: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-xdrbh' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:10:34 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:11:38 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:11:43.659: INFO: Pod "metadata-proxy-v0.1-ljgk8": Phase="Running", Reason="", readiness=true. Elapsed: 2.142406366s Jan 30 09:11:43.659: INFO: Pod "metadata-proxy-v0.1-ljgk8" satisfied condition "running and ready, or succeeded" Jan 30 09:11:43.659: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.142795316s Jan 30 09:11:43.659: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:43 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:08:43 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:11:43.659: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=true. Elapsed: 2.142604382s Jan 30 09:11:43.659: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" satisfied condition "running and ready, or succeeded" Jan 30 09:11:45.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-30 09:10:34 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:40 +0000 UTC}]. Failure Jan 30 09:11:45.657: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.140055485s Jan 30 09:11:45.657: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh" satisfied condition "running and ready, or succeeded" Jan 30 09:11:45.657: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 4.140237483s Jan 30 09:11:45.657: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 30 09:11:45.657: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:11:45.657: INFO: Reboot successful on node bootstrap-e2e-minion-group-hx8v Jan 30 09:11:47.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:40 +0000 UTC}]. Failure Jan 30 09:11:49.481: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-30 09:11:40 +0000 UTC}]. Failure Jan 30 09:11:51.527: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:11:51.527: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hb8pr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:51.527: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:11:51.571: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=true. Elapsed: 43.522166ms Jan 30 09:11:51.571: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" satisfied condition "running and ready, or succeeded" Jan 30 09:11:51.571: INFO: Pod "metadata-proxy-v0.1-hb8pr": Phase="Running", Reason="", readiness=true. Elapsed: 43.596333ms Jan 30 09:11:51.571: INFO: Pod "metadata-proxy-v0.1-hb8pr" satisfied condition "running and ready, or succeeded" Jan 30 09:11:51.571: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:11:51.571: INFO: Reboot successful on node bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:51.571: INFO: Node bootstrap-e2e-minion-group-7cr1 failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:11:51.571 < Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/30/23 09:11:51.571 (3m28.782s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:11:51.571 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 09:11:51.571 Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5d7s9 to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.12:8181/ready": dial tcp 10.64.0.12:8181: connect: connection refused Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5d7s9_kube-system(7bd270c5-f2ec-4a85-9058-86135914ebab) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": dial tcp 10.64.0.15:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5d7s9 Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5d7s9_kube-system(7bd270c5-f2ec-4a85-9058-86135914ebab) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-w57z6 to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.519322897s (3.519341369s including waiting) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.4:8181/ready": dial tcp 10.64.0.4:8181: connect: connection refused Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.10:8181/ready": dial tcp 10.64.0.10:8181: connect: connection refused Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-w57z6_kube-system(1e79e82a-e647-48da-a4fd-05ad6d505eef) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-w57z6 Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.23:8181/ready": dial tcp 10.64.0.23:8181: connect: connection refused Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-w57z6_kube-system(1e79e82a-e647-48da-a4fd-05ad6d505eef) Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-w57z6 Jan 30 09:11:51.625: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5d7s9 Jan 30 09:11:51.625: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 09:11:51.625: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:11:51.625: INFO: event for 
etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:11:51.625: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.625: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:11:51.625: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_94038 became leader Jan 30 09:11:51.625: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_32bba became leader Jan 30 09:11:51.625: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b3e6 became leader Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-b8sc4 to bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 629.593814ms (629.614416ms including waiting) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.3:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Failed: Error: failed to get sandbox container task: no running task found: task 1d9c817ce846f529aa76391072c1a7fd56a9f47957fc17a2690b2671de27ff84 not found: not found Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rj7fc to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.815723669s (1.81573909s including waiting) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 
09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.7:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Failed: Error: failed to get sandbox container task: no running task found: task 11f3dcad8b3972dd50b4e21b10c349a64def00d0106a07d500fcf4637de4bd0d not found: not found Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.17:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rj7fc_kube-system(d1e6165b-b63d-4023-904f-a42ff691e8ae) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-skfnx to bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 625.155725ms (625.171974ms including waiting) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:11:51.625: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rj7fc Jan 30 09:11:51.625: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-b8sc4 Jan 30 09:11:51.625: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-skfnx Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 30 09:11:51.625: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 30 09:11:51.625: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 30 09:11:51.625: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 30 09:11:51.625: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 30 09:11:51.625: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.625: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 30 09:11:51.625: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(4fc5a5aeac3c203e3876adb08d878c93) Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 30 09:11:51.625: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ce182e7b-00b7-4169-8624-f53196308681 became leader Jan 30 09:11:51.625: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_274fd4ca-797b-43c2-b1b6-f36d9e36c2e7 became leader Jan 30 09:11:51.625: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_4b65a89d-ba5e-49e4-8048-9ec50f56a58a became leader Jan 30 09:11:51.625: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c0831107-4ce4-4c38-b5d9-9a3dd92f107b became leader Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-xdrbh to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.161986755s (3.162044961s including waiting) Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-xdrbh Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container autoscaler Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-xdrbh Jan 30 09:11:51.625: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 09:11:51.625: INFO: event for kube-dns: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for 
kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7cr1_kube-system(dd1d9c1acf429448066a68f4147cfb77) Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ctd3_kube-system(f92a9aed872df1bead32b1c0dd213385) Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hx8v_kube-system(acb97e253f2500aa0581d024a2217293) Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
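[Editor's note] The kube-proxy and metadata-proxy events above are per node; the test checks the pods assigned to each node before and after the reboot. A minimal sketch of that per-node pod query, assuming a configured clientset and using one of this run's minion names as an example:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsOnNode lists the kube-system pods scheduled onto a node, roughly the set
// the reboot test waits on for that node.
func podsOnNode(ctx context.Context, cs kubernetes.Interface, node string) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + node, // e.g. "bootstrap-e2e-minion-group-7cr1"
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
	}
	return nil
}
```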
Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.625: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(5b3c0a3dad3d723f9e5778ab0a62849c) Jan 30 09:11:51.625: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_34628f7a-9073-4ee1-9bb3-51be47583fdb became leader Jan 30 09:11:51.625: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_914f6c8b-8db8-44f8-a433-b4e094f84179 became leader Jan 30 09:11:51.625: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_7f5e92cb-6a3a-45d0-be98-a7453645cadf became leader Jan 30 09:11:51.625: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_2e191714-4d71-4061-b14a-06b3d43bf967 became leader Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
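[Editor's note] The kube-scheduler liveness failure above ("dial tcp 127.0.0.1:10259: connect: connection refused") simply means nothing was listening on the probe port while the container was down. A sketch of reproducing that probe by hand from the master node, assuming it is run there; skipping certificate verification matches how kubelet HTTPS probes behave.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 15 * time.Second,
		Transport: &http.Transport{
			// Probes do not verify the scheduler's serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://127.0.0.1:10259/healthz")
	if err != nil {
		// While the container is restarting this reports "connect: connection refused".
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.StatusCode)
}
```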
Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-fq84f to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.812003002s (1.812012686s including waiting) Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container default-http-backend Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.625: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.626: INFO: event for l7-default-backend-8549d69d99-fq84f: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-fq84f Jan 30 09:11:51.626: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:11:51.626: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:11:51.626: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-fq84f Jan 30 09:11:51.626: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 09:11:51.626: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 09:11:51.626: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 09:11:51.626: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 09:11:51.626: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 09:11:51.626: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-d2qbs to bootstrap-e2e-master Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 767.072137ms (767.083529ms including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.898844974s (1.898853058s including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-f6lhm to 
bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 760.69768ms (760.732368ms including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.890232448s (1.890241652s including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hb8pr to bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 790.982977ms (791.000841ms including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for 
metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.062535103s (2.062546601s including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ljgk8 to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 732.378395ms (732.411068ms including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.82877905s (1.828788865s including waiting) Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created 
container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ljgk8 Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-d2qbs Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hb8pr Jan 30 09:11:51.626: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-f6lhm Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
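[Editor's note] Several pods above were rejected with "untolerated taint {node.kubernetes.io/not-ready: }" or "node(s) were unschedulable". A small sketch for inspecting the node state behind those scheduler messages, assuming a configured clientset:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpTaints prints each node's schedulability flag and taints, which is what
// the FailedScheduling events above are reporting on.
func dumpTaints(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s unschedulable=%v taints=%v\n", n.Name, n.Spec.Unschedulable, n.Spec.Taints)
	}
	return nil
}
```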
Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-v25xc to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.900251051s (3.900291297s including waiting) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.359025956s (3.35903606s including waiting) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server-nanny Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server-nanny Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server-nanny Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-q4757 to bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.428302629s (1.428313025s including waiting) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.023447114s (1.023460341s including waiting) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server-nanny Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server-nanny Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {node-controller } NodeNotReady: Node is not ready Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 09:11:51.626: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
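[Editor's note] The kubelet summaries further below report per-container "ready" and "restart count" values. A sketch that retrieves the same fields for one pod, assuming a configured clientset; the namespace and pod name are arguments (e.g. kube-system / metrics-server-v0.5.2-867b8754b9-q4757 from the events above).

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// containerSummary prints lines shaped like the framework's
// "Container <name> ready: <bool>, restart count <n>" output.
func containerSummary(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, s := range pod.Status.ContainerStatuses {
		fmt.Printf("Container %s ready: %v, restart count %d\n", s.Name, s.Ready, s.RestartCount)
	}
	return nil
}
```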
Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-hx8v Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.510528775s (3.510537487s including waiting) Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2) Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
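[Editor's note] The teardown that follows waits up to 3m0s for all nodes to report Ready and then dumps each Node object. A minimal sketch of the equivalent Ready-condition check, assuming a configured clientset:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// readyNodes prints each node's Ready condition, the signal the teardown waits on.
func readyNodes(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("%s Ready=%s since %s (%s)\n", n.Name, c.Status, c.LastTransitionTime, c.Reason)
			}
		}
	}
	return nil
}
```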
Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:11:51.626: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:11:51.626 (54ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:11:51.626 Jan 30 09:11:51.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:11:51.68 (54ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:11:51.68 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:11:51.68 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 09:11:51.68 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 09:11:51.68 STEP: Collecting events from namespace "reboot-7046". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 09:11:51.68 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 09:11:51.75 Jan 30 09:11:51.791: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 09:11:51.791: INFO: Jan 30 09:11:51.834: INFO: Logging node info for node bootstrap-e2e-master Jan 30 09:11:51.876: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master a34af008-0528-47e4-a6c5-cd39d827847f 1307 0 2023-01-30 09:04:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 09:10:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:10:27 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:10:27 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:10:27 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:10:27 +0000 UTC,LastTransitionTime:2023-01-30 09:04:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.185.231.33,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:87a05ebeec11f95c366dec3ebfb54572,SystemUUID:87a05ebe-ec11-f95c-366d-ec3ebfb54572,BootID:b21fbdba-5e8a-4560-8e5c-0b3f13ec273b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:125279033,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:11:51.876: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 09:11:51.922: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 09:11:51.984: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container etcd-container ready: true, restart count 2 Jan 30 09:11:51.984: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 30 09:11:51.984: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container kube-controller-manager ready: true, restart count 4 Jan 30 09:11:51.984: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container kube-scheduler ready: true, restart count 3 Jan 30 09:11:51.984: INFO: metadata-proxy-v0.1-d2qbs started at 2023-01-30 09:04:49 +0000 UTC (0+2 container statuses recorded) Jan 30 09:11:51.984: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:11:51.984: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:11:51.984: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container etcd-container ready: true, restart count 1 Jan 30 09:11:51.984: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container kube-apiserver ready: true, restart count 0 Jan 30 09:11:51.984: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container kube-addon-manager ready: true, restart count 1 Jan 30 09:11:51.984: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:51.984: INFO: Container l7-lb-controller ready: false, restart count 4 Jan 30 09:11:52.160: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 09:11:52.160: INFO: Logging node info for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:52.202: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7cr1 059c215f-20bf-4d2a-9d08-dd76e71cd121 1435 0 2023-01-30 09:04:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7cr1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:10:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:11:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 09:11:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-30 09:11:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-7cr1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:18 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:11:19 +0000 UTC,LastTransitionTime:2023-01-30 09:11:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.80.94,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:152f308b39d31a7b07927ba8747dc4e6,SystemUUID:152f308b-39d3-1a7b-0792-7ba8747dc4e6,BootID:041305a0-41ee-4de6-8a8c-5f412e8332bd,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:11:52.203: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:52.248: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:52.355: INFO: kube-proxy-bootstrap-e2e-minion-group-7cr1 started at 2023-01-30 09:04:13 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:52.355: INFO: Container kube-proxy ready: false, restart count 3 Jan 30 09:11:52.355: INFO: metadata-proxy-v0.1-f6lhm started at 2023-01-30 09:04:15 +0000 UTC (0+2 container statuses recorded) Jan 30 09:11:52.355: INFO: Container metadata-proxy ready: true, restart count 1 Jan 30 09:11:52.355: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 30 09:11:52.355: INFO: konnectivity-agent-b8sc4 started at 2023-01-30 09:04:29 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:52.355: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 30 09:11:52.514: INFO: Latency metrics for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:11:52.514: INFO: Logging node info for node bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:52.556: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ctd3 1fc63985-9867-4666-aa14-c3224e06ef55 1567 0 2023-01-30 09:04:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ctd3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:10:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-30 09:11:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:11:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 09:11:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-ctd3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:11:46 +0000 UTC,LastTransitionTime:2023-01-30 09:11:45 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:47 +0000 UTC,LastTransitionTime:2023-01-30 09:11:46 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:47 +0000 UTC,LastTransitionTime:2023-01-30 09:11:46 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:47 +0000 UTC,LastTransitionTime:2023-01-30 09:11:46 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:11:47 +0000 UTC,LastTransitionTime:2023-01-30 09:11:47 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.47.9,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ctd3.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ctd3.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:589c645a71700ad5ee732b565ea0a6c2,SystemUUID:589c645a-7170-0ad5-ee73-2b565ea0a6c2,BootID:e2652c78-7ec4-47ff-a9f6-a32d90c22cce,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:11:52.556: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:52.602: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:52.665: INFO: metrics-server-v0.5.2-867b8754b9-q4757 started at 2023-01-30 09:04:40 +0000 UTC (0+2 container statuses recorded) Jan 30 09:11:52.665: INFO: Container metrics-server ready: false, restart count 2 Jan 30 09:11:52.665: INFO: Container metrics-server-nanny ready: false, restart count 2 Jan 30 09:11:52.665: INFO: kube-proxy-bootstrap-e2e-minion-group-ctd3 started at 2023-01-30 09:04:13 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:52.665: INFO: Container kube-proxy ready: true, restart count 4 Jan 30 09:11:52.665: INFO: metadata-proxy-v0.1-hb8pr started at 2023-01-30 09:04:13 +0000 UTC (0+2 container statuses recorded) Jan 30 09:11:52.665: INFO: Container metadata-proxy ready: true, restart count 1 Jan 30 09:11:52.665: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 30 09:11:52.665: INFO: konnectivity-agent-skfnx started at 2023-01-30 09:04:29 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:52.665: INFO: Container konnectivity-agent ready: true, restart count 3 Jan 30 09:11:58.252: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-ctd3 Jan 30 09:11:58.252: INFO: Logging node info for node bootstrap-e2e-minion-group-hx8v Jan 30 09:11:58.295: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hx8v 2cf6f8aa-df64-4aca-ac1b-6cbf533da69a 1494 0 2023-01-30 09:04:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hx8v kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:10:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:11:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-30 09:11:37 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-30 09:11:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-hx8v,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 
UTC,LastTransitionTime:2023-01-30 09:11:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:15 +0000 UTC,LastTransitionTime:2023-01-30 09:04:15 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:37 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:37 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:37 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:11:37 +0000 UTC,LastTransitionTime:2023-01-30 09:11:37 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.2.148,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hx8v.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hx8v.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:be2afed7762cfdb54d3ec5133fceeff6,SystemUUID:be2afed7-762c-fdb5-4d3e-c5133fceeff6,BootID:652b1023-b7e9-4b95-a539-4c93e8cda554,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:11:58.296: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hx8v Jan 30 09:11:58.341: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hx8v Jan 30 09:11:58.407: INFO: l7-default-backend-8549d69d99-fq84f started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container default-http-backend ready: false, restart count 1 Jan 30 09:11:58.408: INFO: kube-dns-autoscaler-5f6455f985-xdrbh started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container autoscaler ready: true, restart count 1 Jan 30 09:11:58.408: INFO: volume-snapshot-controller-0 started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container volume-snapshot-controller ready: true, restart count 4 Jan 30 09:11:58.408: INFO: coredns-6846b5b5f-w57z6 started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container coredns ready: false, restart count 4 Jan 30 09:11:58.408: INFO: metadata-proxy-v0.1-ljgk8 started at 2023-01-30 09:04:10 +0000 UTC (0+2 container statuses recorded) Jan 30 09:11:58.408: INFO: Container metadata-proxy ready: true, restart count 1 Jan 30 09:11:58.408: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 30 09:11:58.408: INFO: konnectivity-agent-rj7fc started at 2023-01-30 09:04:16 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 30 09:11:58.408: INFO: coredns-6846b5b5f-5d7s9 started at 2023-01-30 09:04:22 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container coredns ready: false, restart count 3 Jan 30 09:11:58.408: INFO: kube-proxy-bootstrap-e2e-minion-group-hx8v started at 2023-01-30 09:04:09 +0000 UTC (0+1 container statuses recorded) Jan 30 09:11:58.408: INFO: Container kube-proxy ready: true, restart count 4 Jan 30 09:12:22.910: INFO: Latency metrics for node bootstrap-e2e-minion-group-hx8v END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 09:12:22.91 (31.23s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 09:12:22.91 (31.23s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 09:12:22.91 STEP: Destroying namespace "reboot-7046" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/30/23 09:12:22.91 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 09:12:22.957 (47ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 09:12:22.957 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 09:12:22.957 (0s)
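Note: the failure logged above comes from the reboot-test variant that temporarily blocks all inbound traffic to each node and then expects the node to recover. The exact iptables rules are built by the test itself (see test/e2e/cloud/gcp/reboot.go) and are not reproduced in this log; the snippet below is only a rough, standalone sketch of the technique (SSH in, background a shell that installs a DROP rule and removes it after a delay), with hypothetical host, user, and timing values. It is not the test's literal code.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // dropInboundFor sketches the "block all inbound packets for a while" disruption:
    // a backgrounded shell on the node waits briefly (so the SSH session can close),
    // keeps loopback traffic allowed, drops everything else inbound, and removes the
    // DROP rule again after `seconds`. The concrete rules used by the e2e test differ;
    // this is an illustration only.
    func dropInboundFor(host string, seconds int) error {
    	cmd := fmt.Sprintf(
    		"nohup sh -c 'sleep 10 && sudo iptables -I INPUT -i lo -j ACCEPT && "+
    			"sudo iptables -A INPUT -j DROP && "+
    			"sleep %d && sudo iptables -D INPUT -j DROP' >/dev/null 2>&1 &",
    		seconds)
    	return exec.Command("ssh", host, cmd).Run()
    }

    func main() {
    	// Hypothetical example values; the framework resolves each node's external IP itself.
    	if err := dropInboundFor("prow@34.82.80.94", 120); err != nil {
    		fmt.Println("ssh failed:", err)
    	}
    }

As with the sysrq variant below, the remote command is backgrounded with nohup so the SSH session returns before connectivity is actually cut.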
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:24:11.895 (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:22:09.794 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:22:09.794 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:22:09.794 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 09:22:09.794 Jan 30 09:22:09.794: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 09:22:09.795 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 09:22:09.922 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 09:22:10.003 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:22:10.083 (290ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:22:10.083 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:22:10.084 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/30/23 09:22:10.084 Jan 30 09:22:10.181: INFO: Getting bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:10.181: INFO: Getting bootstrap-e2e-minion-group-hx8v Jan 30 09:22:10.181: INFO: Getting bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:10.255: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be true Jan 30 09:22:10.255: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be true Jan 30 09:22:10.257: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hx8v condition Ready to be true Jan 30 09:22:10.299: INFO: Node bootstrap-e2e-minion-group-7cr1 has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-f6lhm kube-proxy-bootstrap-e2e-minion-group-7cr1] Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-f6lhm kube-proxy-bootstrap-e2e-minion-group-7cr1] Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.299: INFO: Node bootstrap-e2e-minion-group-ctd3 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hb8pr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-f6lhm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.300: INFO: Node bootstrap-e2e-minion-group-hx8v has 4 assigned pods with no liveness 
probes: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:22:10.300: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:22:10.300: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.300: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-xdrbh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.300: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.300: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ljgk8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.348: INFO: Pod "metadata-proxy-v0.1-hb8pr": Phase="Running", Reason="", readiness=true. Elapsed: 49.421227ms Jan 30 09:22:10.348: INFO: Pod "metadata-proxy-v0.1-hb8pr" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.348: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=true. Elapsed: 47.902874ms Jan 30 09:22:10.348: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.348: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1": Phase="Running", Reason="", readiness=true. Elapsed: 49.623907ms Jan 30 09:22:10.348: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.348: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=true. Elapsed: 49.539642ms Jan 30 09:22:10.348: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.348: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:22:10.348: INFO: Getting external IP address for bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:10.348: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-ctd3(35.197.47.9:22) Jan 30 09:22:10.349: INFO: Pod "metadata-proxy-v0.1-f6lhm": Phase="Running", Reason="", readiness=true. Elapsed: 49.932948ms Jan 30 09:22:10.349: INFO: Pod "metadata-proxy-v0.1-f6lhm" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.349: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-f6lhm kube-proxy-bootstrap-e2e-minion-group-7cr1] Jan 30 09:22:10.349: INFO: Getting external IP address for bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:10.349: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-7cr1(34.82.80.94:22) Jan 30 09:22:10.349: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=true. 
Elapsed: 48.906406ms Jan 30 09:22:10.349: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.349: INFO: Pod "metadata-proxy-v0.1-ljgk8": Phase="Running", Reason="", readiness=true. Elapsed: 48.885795ms Jan 30 09:22:10.349: INFO: Pod "metadata-proxy-v0.1-ljgk8" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.349: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 49.202476ms Jan 30 09:22:10.349: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:10.863: INFO: ssh prow@35.197.47.9:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 30 09:22:10.863: INFO: ssh prow@35.197.47.9:22: stdout: "" Jan 30 09:22:10.863: INFO: ssh prow@35.197.47.9:22: stderr: "" Jan 30 09:22:10.863: INFO: ssh prow@35.197.47.9:22: exit code: 0 Jan 30 09:22:10.863: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be false Jan 30 09:22:10.871: INFO: ssh prow@34.82.80.94:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 30 09:22:10.871: INFO: ssh prow@34.82.80.94:22: stdout: "" Jan 30 09:22:10.871: INFO: ssh prow@34.82.80.94:22: stderr: "" Jan 30 09:22:10.871: INFO: ssh prow@34.82.80.94:22: exit code: 0 Jan 30 09:22:10.871: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be false Jan 30 09:22:10.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:10.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:12.400: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.099911514s Jan 30 09:22:12.400: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:12.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:12.962: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:14.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.091807768s Jan 30 09:22:14.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:14.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:15.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:16.391: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.091346698s Jan 30 09:22:16.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:17.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:17.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:18.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.092002308s Jan 30 09:22:18.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:19.084: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:19.099: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:20.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.09190808s Jan 30 09:22:20.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:21.126: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:21.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:22.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.092245339s Jan 30 09:22:22.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:23.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:23.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:22:24.393: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.093088034s Jan 30 09:22:24.393: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:25.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:25.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:26.391: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.091265166s Jan 30 09:22:26.391: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:27.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:27.274: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:28.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.091916398s Jan 30 09:22:28.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:29.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:29.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:30.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.091703389s Jan 30 09:22:30.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:31.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:31.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:32.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.091724284s Jan 30 09:22:32.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:33.386: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:33.401: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:34.389: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://35.185.231.33/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": dial tcp 35.185.231.33:443: connect: connection refused Jan 30 09:22:34.389: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 30 09:22:34.389: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:22:34.389: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 09:04:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 09:21:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 09:21:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 09:04:15 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.0.45 PodIPs:[{IP:10.64.0.45}] StartTime:2023-01-30 09:04:15 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 09:19:53 +0000 UTC,FinishedAt:2023-01-30 09:21:03 +0000 UTC,ContainerID:containerd://3b8cac8bf88bc5b80a6fbecade4d0f31abaae9c9142d6db038aecb309f6e7764,}} Ready:false RestartCount:10 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://3b8cac8bf88bc5b80a6fbecade4d0f31abaae9c9142d6db038aecb309f6e7764 Started:0xc0047536ff}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 30 09:22:34.429: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.185.231.33/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.185.231.33:443: connect: connection refused: Jan 30 09:22:34.429: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.185.231.33/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.185.231.33:443: connect: connection refused: Jan 30 09:22:35.426: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:35.442: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:37.466: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:37.482: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:39.506: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:39.522: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:41.546: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:41.562: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:43.586: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:43.602: INFO: Couldn't 
get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:45.626: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:45.642: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:47.668: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:47.682: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:49.708: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:49.722: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:51.749: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:51.762: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:53.789: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:53.802: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:55.830: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:55.842: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:57.869: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:57.882: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:59.909: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:59.921: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:23:01.950: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:23:01.961: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:23:03.989: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:23:04.002: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:23:10.589: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:10.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:12.631: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:12.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:14.675: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:14.679: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:16.719: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:16.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:18.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:18.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:23:20.804: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:20.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:22.847: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:22.851: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:24.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:24.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:26.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:26.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:28.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:28.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:31.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:31.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:33.065: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:33.067: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:35.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:35.111: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:37.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:37.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:39.204: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:39.204: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:41.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:41.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:43.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:43.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:45.343: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:45.343: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:47.389: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:47.389: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:49.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:49.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:51.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:51.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:53.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:53.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:55.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:55.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:57.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:23:57.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:59.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:59.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:24:01.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:24:01.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:24:03.755: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:24:03.755: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:24:05.802: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:24:05.802: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:24:07.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:24:07.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:24:09.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:24:09.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:24:11.895: INFO: Node bootstrap-e2e-minion-group-ctd3 didn't reach desired Ready condition status (false) within 2m0s Jan 30 09:24:11.895: INFO: Node bootstrap-e2e-minion-group-7cr1 didn't reach desired Ready condition status (false) within 2m0s Jan 30 09:24:11.895: INFO: Node bootstrap-e2e-minion-group-7cr1 failed reboot test. Jan 30 09:24:11.895: INFO: Node bootstrap-e2e-minion-group-ctd3 failed reboot test. Jan 30 09:24:11.895: INFO: Node bootstrap-e2e-minion-group-hx8v failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:24:11.895 < Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/30/23 09:24:11.895 (2m1.812s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:24:11.895 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 09:24:11.895 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5d7s9 to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.12:8181/ready": dial tcp 10.64.0.12:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5d7s9_kube-system(7bd270c5-f2ec-4a85-9058-86135914ebab) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": dial tcp 10.64.0.15:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5d7s9 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5d7s9_kube-system(7bd270c5-f2ec-4a85-9058-86135914ebab) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.31:8181/ready": dial tcp 10.64.0.31:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-w57z6 to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.519322897s (3.519341369s including waiting) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.4:8181/ready": dial tcp 10.64.0.4:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.10:8181/ready": dial tcp 10.64.0.10:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-w57z6_kube-system(1e79e82a-e647-48da-a4fd-05ad6d505eef) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-w57z6 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.23:8181/ready": dial tcp 10.64.0.23:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-w57z6_kube-system(1e79e82a-e647-48da-a4fd-05ad6d505eef) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.28:8181/ready": dial tcp 10.64.0.28:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
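Note: the coredns events above show HTTP readiness probes against the pod IP (e.g. http://10.64.0.8:8181/ready) failing with "connection refused", "context deadline exceeded", or a 503 while the node's network was disrupted. As a rough illustration only (this is not code from the e2e suite or the kubelet), an HTTP readiness/liveness probe boils down to a GET with a short timeout where any transport error or non-2xx status is a failure; the URL below is taken from the events above purely as an example:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probe issues a single HTTP GET the way an HTTP readiness/liveness probe does:
// any transport error (connection refused, timeout) or a non-2xx status code
// counts as a probe failure.
func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "connect: connection refused" or "context deadline exceeded"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Pod IP, port, and path taken from the coredns events above; purely illustrative.
	if err := probe("http://10.64.0.8:8181/ready", 1*time.Second); err != nil {
		fmt.Println("Unhealthy: Readiness probe failed:", err)
	}
}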
Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-w57z6 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5d7s9 Jan 30 09:24:11.955: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 09:24:11.955: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:24:11.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_94038 became leader Jan 30 09:24:11.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_32bba became leader Jan 30 09:24:11.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b3e6 became leader Jan 30 09:24:11.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8ed1e became leader Jan 30 09:24:11.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_e0e2a became leader Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-b8sc4 to bootstrap-e2e-minion-group-7cr1 Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 629.593814ms (629.614416ms including waiting) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.3:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Failed: Error: failed to get sandbox container task: no running task found: task 1d9c817ce846f529aa76391072c1a7fd56a9f47957fc17a2690b2671de27ff84 not found: not found Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.8:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Failed: Error: failed to get sandbox container task: no running task found: task 66dcd5feab4c1724b438578787d177422e28292e6127e94d8530082238cd5f9d not found: not found Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rj7fc to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.815723669s (1.81573909s including waiting) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.7:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet 
bootstrap-e2e-minion-group-hx8v} Failed: Error: failed to get sandbox container task: no running task found: task 11f3dcad8b3972dd50b4e21b10c349a64def00d0106a07d500fcf4637de4bd0d not found: not found Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.17:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rj7fc_kube-system(d1e6165b-b63d-4023-904f-a42ff691e8ae) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rj7fc_kube-system(d1e6165b-b63d-4023-904f-a42ff691e8ae) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.41:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-skfnx to bootstrap-e2e-minion-group-ctd3 Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 625.155725ms (625.171974ms including waiting) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rj7fc Jan 30 09:24:11.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-b8sc4 Jan 30 09:24:11.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-skfnx Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 30 09:24:11.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(4fc5a5aeac3c203e3876adb08d878c93) Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ce182e7b-00b7-4169-8624-f53196308681 became leader Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_274fd4ca-797b-43c2-b1b6-f36d9e36c2e7 became leader Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_4b65a89d-ba5e-49e4-8048-9ec50f56a58a became leader Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c0831107-4ce4-4c38-b5d9-9a3dd92f107b became leader Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a859a42b-ffd0-49bb-9be9-fb447a7be398 became leader Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c030a8db-17a1-4d09-80f9-d039b630d51d became leader Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-xdrbh to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.161986755s (3.162044961s including waiting) Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-xdrbh Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-xdrbh_kube-system(e7e6cc3b-cfe7-4dd0-832f-ec18c94765b2) Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-xdrbh_kube-system(e7e6cc3b-cfe7-4dd0-832f-ec18c94765b2) Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-xdrbh Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 09:24:11.955: INFO: event for kube-dns: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" 
already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7cr1_kube-system(dd1d9c1acf429448066a68f4147cfb77) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7cr1_kube-system(dd1d9c1acf429448066a68f4147cfb77) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
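Note: the kube-dns event a few entries above ("FailedToUpdateEndpoint ... the object has been modified; please apply your changes to the latest version and try again") is the API server's standard optimistic-concurrency conflict. Controllers normally handle it by re-reading the object and retrying the write; a minimal client-go sketch of that pattern follows (illustrative only, not the endpoint controller's actual code; the kubeconfig path and the annotation mutation are assumptions for the example):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// RetryOnConflict re-runs the closure whenever the update fails with a
	// 409 Conflict ("the object has been modified"), fetching a fresh copy
	// with the latest resourceVersion each time before retrying.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ep, err := client.CoreV1().Endpoints("kube-system").Get(context.TODO(), "kube-dns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ep.Annotations == nil {
			ep.Annotations = map[string]string{}
		}
		ep.Annotations["example/touched"] = "true" // illustrative mutation only
		_, err = client.CoreV1().Endpoints("kube-system").Update(context.TODO(), ep, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}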
Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7cr1_kube-system(dd1d9c1acf429448066a68f4147cfb77) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ctd3_kube-system(f92a9aed872df1bead32b1c0dd213385) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ctd3_kube-system(f92a9aed872df1bead32b1c0dd213385) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hx8v_kube-system(acb97e253f2500aa0581d024a2217293) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hx8v_kube-system(acb97e253f2500aa0581d024a2217293) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hx8v_kube-system(acb97e253f2500aa0581d024a2217293) Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
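Note: the DNSConfigForming warnings on the kube-proxy pods ("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1") come from the kubelet trimming the node's resolv.conf down to the resolver limit of three nameservers. A small Go sketch of that kind of trimming follows, as an illustration of the warning only (this is not the kubelet's implementation, and the three-nameserver limit is stated here as an assumption based on the message):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxDNSNameservers = 3 // assumed resolver limit behind the DNSConfigForming warning

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Collect every "nameserver" entry from resolv.conf.
	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}

	// When the limit is exceeded, only the first three are applied and a warning is emitted.
	if len(nameservers) > maxDNSNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(nameservers[:maxDNSNameservers], " "))
	}
}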
Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(5b3c0a3dad3d723f9e5778ab0a62849c) Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_34628f7a-9073-4ee1-9bb3-51be47583fdb became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_914f6c8b-8db8-44f8-a433-b4e094f84179 became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_7f5e92cb-6a3a-45d0-be98-a7453645cadf became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_2e191714-4d71-4061-b14a-06b3d43bf967 became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_3d1c3010-6ad9-4e0d-a7ef-bb1aad2ccd0c became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c51b6cd6-821c-4b76-bb1a-178d895540fe became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4bab283c-48f6-476b-8a11-39987d9e7dc1 became leader Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
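The two FailedScheduling messages above show why l7-default-backend-8549d69d99-fq84f stayed Pending until a node turned Ready: the pod carries no toleration for the node.kubernetes.io/not-ready taint. Purely as a sketch of the field the scheduler is checking (the pod's real manifest is not reproduced here, and adding such a toleration is not being suggested as a fix), that toleration would look like:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative only: the shape of a toleration that would let a pod
	// schedule onto a node still tainted node.kubernetes.io/not-ready
	// with effect NoSchedule. Not taken from the l7-default-backend spec.
	tol := corev1.Toleration{
		Key:      "node.kubernetes.io/not-ready",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("%+v\n", tol)
}
```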
Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-fq84f to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.812003002s (1.812012686s including waiting) Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-fq84f Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.36:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-fq84f Jan 30 09:24:11.955: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 09:24:11.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 09:24:11.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 09:24:11.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 09:24:11.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 09:24:11.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 30 09:24:11.955: INFO: event for metadata-proxy-v0.1-d2qbs: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-d2qbs to bootstrap-e2e-master Jan 30 09:24:11.955: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 767.072137ms (767.083529ms including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.898844974s (1.898853058s including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-f6lhm to bootstrap-e2e-minion-group-7cr1 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 760.69768ms (760.732368ms including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.890232448s (1.890241652s including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hb8pr to bootstrap-e2e-minion-group-ctd3 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 790.982977ms (791.000841ms including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.062535103s (2.062546601s including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ljgk8 to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 732.378395ms (732.411068ms including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet 
bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.82877905s (1.828788865s including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ljgk8 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-d2qbs Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hb8pr Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-f6lhm Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
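The DNSConfigForming warnings repeated above come from the kubelet capping the nameservers it copies into a pod's resolv.conf at three, which is why exactly three applied entries (1.1.1.1 8.8.8.8 1.0.0.1) are listed. A minimal sketch of that truncation, with a made-up fourth resolver standing in for whatever extra entry the node actually had:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the limit the kubelet enforces when forming pod
// DNS config; three entries is why the warning above lists exactly three
// applied nameservers.
const maxNameservers = 3

// applyNameserverLimit is an illustrative stand-in for the kubelet's
// behaviour: keep the first three resolvers and report whether any were
// dropped.
func applyNameserverLimit(servers []string) (applied []string, truncated bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// The fourth entry is hypothetical; the node's real resolv.conf is not
	// shown in the log.
	host := []string{"1.1.1.1", "8.8.8.8", "1.0.0.1", "169.254.169.254"}
	applied, truncated := applyNameserverLimit(host)
	if truncated {
		fmt.Println("nameserver limit exceeded, applied:", strings.Join(applied, " "))
	}
}
```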
Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-v25xc to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.900251051s (3.900291297s including waiting) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.359025956s (3.35903606s including waiting) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-q4757 to bootstrap-e2e-minion-group-ctd3 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.428302629s (1.428313025s including waiting) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.023447114s (1.023460341s including waiting) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.7:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.9:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.9:10250/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
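The Unhealthy events for metrics-server above are the expected symptom while a node is dropping inbound packets: the kubelet's HTTPS probes to /livez and /readyz exceed their timeout, and after enough consecutive failures the container is restarted. A sketch of the probe fields involved, with values assumed for illustration rather than taken from the metrics-server manifest:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Illustrative liveness probe: when inbound packets are dropped, this
	// HTTPS GET exceeds TimeoutSeconds, the kubelet records "Liveness probe
	// failed", and after FailureThreshold consecutive failures the container
	// is killed and restarted. Values are assumptions, not the real spec.
	liveness := corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/livez",
				Port:   intstr.FromInt(10250),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		TimeoutSeconds:   1,
		PeriodSeconds:    10,
		FailureThreshold: 3,
	}
	fmt.Printf("%+v\n", liveness)
}
```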
Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.510528775s (3.510537487s including waiting) Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2) Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2) Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
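Every "event for <pod>" line in the dump above is simply the kube-system Events API read back per involved object. A minimal client-go sketch of the same query (assuming a kubeconfig at the default path, which is not how the e2e framework itself builds its client):

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Minimal sketch, not the e2e framework's own dump code: list events in
	// kube-system and print them in roughly the "event for <pod>" shape
	// seen in the log above.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	events, err := client.CoreV1().Events("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}
```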
Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2) Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:24:11.956 (61ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:24:11.956 Jan 30 09:24:11.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:24:12.002 (46ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:24:12.002 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:24:12.002 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 09:24:12.002 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 09:24:12.002 STEP: Collecting events from namespace "reboot-2584". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 09:24:12.002 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 09:24:12.043 Jan 30 09:24:12.085: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 09:24:12.085: INFO: Jan 30 09:24:12.129: INFO: Logging node info for node bootstrap-e2e-master Jan 30 09:24:12.172: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master a34af008-0528-47e4-a6c5-cd39d827847f 2496 0 2023-01-30 09:04:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 09:20:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:20:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:20:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:20:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:20:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.185.231.33,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:87a05ebeec11f95c366dec3ebfb54572,SystemUUID:87a05ebe-ec11-f95c-366d-ec3ebfb54572,BootID:b21fbdba-5e8a-4560-8e5c-0b3f13ec273b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:125279033,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:24:12.172: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 09:24:12.237: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 09:24:12.340: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container etcd-container ready: true, restart count 1 Jan 30 09:24:12.340: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 09:24:12.340: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 30 09:24:12.340: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container l7-lb-controller ready: true, restart count 7 Jan 30 09:24:12.340: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container etcd-container ready: true, restart count 4 Jan 30 09:24:12.340: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container konnectivity-server-container ready: true, restart count 3 Jan 30 09:24:12.340: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container kube-controller-manager ready: false, restart count 6 Jan 30 09:24:12.340: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container kube-scheduler ready: false, restart count 6 Jan 30 09:24:12.340: INFO: metadata-proxy-v0.1-d2qbs started at 2023-01-30 09:04:49 +0000 UTC (0+2 container statuses recorded) Jan 30 09:24:12.340: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:24:12.340: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:24:12.523: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 09:24:12.523: INFO: Logging node info for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:24:12.565: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7cr1 059c215f-20bf-4d2a-9d08-dd76e71cd121 2658 0 2023-01-30 09:04:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7cr1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:17:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 09:19:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-30 09:22:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 09:23:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-7cr1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:52 +0000 UTC,LastTransitionTime:2023-01-30 09:19:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:52 +0000 UTC,LastTransitionTime:2023-01-30 09:19:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:52 +0000 UTC,LastTransitionTime:2023-01-30 09:19:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:19:52 +0000 UTC,LastTransitionTime:2023-01-30 09:19:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.80.94,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:152f308b39d31a7b07927ba8747dc4e6,SystemUUID:152f308b-39d3-1a7b-0792-7ba8747dc4e6,BootID:d9763507-0052-4a05-9205-8ee7c75cec27,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:24:12.566: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:24:12.613: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7cr1 Jan 30 09:24:12.660: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-7cr1: error trying to reach service: dial tcp 10.138.0.3:10250: connect: connection refused Jan 30 09:24:12.660: INFO: Logging node info for node bootstrap-e2e-minion-group-ctd3 Jan 30 09:24:12.703: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ctd3 1fc63985-9867-4666-aa14-c3224e06ef55 2655 0 2023-01-30 09:04:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ctd3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:17:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 09:19:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-30 09:22:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 09:23:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-ctd3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} 
{<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:54 +0000 UTC,LastTransitionTime:2023-01-30 09:19:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:54 +0000 UTC,LastTransitionTime:2023-01-30 09:19:54 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:54 +0000 UTC,LastTransitionTime:2023-01-30 09:19:54 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:19:54 +0000 UTC,LastTransitionTime:2023-01-30 09:19:54 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.47.9,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-ctd3.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-ctd3.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:589c645a71700ad5ee732b565ea0a6c2,SystemUUID:589c645a-7170-0ad5-ee73-2b565ea0a6c2,BootID:75343662-caca-4346-b26c-dbb44fc7524c,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:24:12.703: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-ctd3 Jan 30 09:24:12.751: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-ctd3 Jan 30 09:24:12.808: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-ctd3: error trying to reach service: dial tcp 10.138.0.5:10250: connect: connection refused Jan 30 09:24:12.808: INFO: Logging node info for node bootstrap-e2e-minion-group-hx8v Jan 30 09:24:12.852: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-hx8v 2cf6f8aa-df64-4aca-ac1b-6cbf533da69a 2570 0 2023-01-30 09:04:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-hx8v kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:17:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kube-controller-manager Update v1 2023-01-30 09:17:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-30 09:19:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-30 09:19:57 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-hx8v,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:19:26 +0000 UTC,LastTransitionTime:2023-01-30 09:13:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:19:26 +0000 UTC,LastTransitionTime:2023-01-30 09:13:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:19:26 +0000 UTC,LastTransitionTime:2023-01-30 09:13:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:19:26 +0000 UTC,LastTransitionTime:2023-01-30 09:13:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:19:26 +0000 UTC,LastTransitionTime:2023-01-30 09:13:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:19:26 +0000 UTC,LastTransitionTime:2023-01-30 09:13:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:19:26 +0000 UTC,LastTransitionTime:2023-01-30 09:13:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:15 +0000 UTC,LastTransitionTime:2023-01-30 09:04:15 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:57 +0000 UTC,LastTransitionTime:2023-01-30 09:19:57 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:57 +0000 UTC,LastTransitionTime:2023-01-30 09:19:57 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:57 +0000 UTC,LastTransitionTime:2023-01-30 09:19:57 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:19:57 +0000 UTC,LastTransitionTime:2023-01-30 09:19:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.127.2.148,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-hx8v.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-hx8v.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:be2afed7762cfdb54d3ec5133fceeff6,SystemUUID:be2afed7-762c-fdb5-4d3e-c5133fceeff6,BootID:bad30064-56fb-4a8f-b614-3606a4bc58d3,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:24:12.852: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-hx8v Jan 30 09:24:12.902: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-hx8v Jan 30 09:24:12.982: INFO: l7-default-backend-8549d69d99-fq84f started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.982: INFO: Container default-http-backend ready: true, restart 
count 4 Jan 30 09:24:12.982: INFO: volume-snapshot-controller-0 started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.982: INFO: Container volume-snapshot-controller ready: true, restart count 11 Jan 30 09:24:12.982: INFO: kube-dns-autoscaler-5f6455f985-xdrbh started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.982: INFO: Container autoscaler ready: true, restart count 5 Jan 30 09:24:12.982: INFO: coredns-6846b5b5f-w57z6 started at 2023-01-30 09:04:15 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.982: INFO: Container coredns ready: false, restart count 6 Jan 30 09:24:12.982: INFO: metadata-proxy-v0.1-ljgk8 started at 2023-01-30 09:04:10 +0000 UTC (0+2 container statuses recorded) Jan 30 09:24:12.982: INFO: Container metadata-proxy ready: true, restart count 2 Jan 30 09:24:12.982: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 30 09:24:12.982: INFO: konnectivity-agent-rj7fc started at 2023-01-30 09:04:16 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.982: INFO: Container konnectivity-agent ready: true, restart count 9 Jan 30 09:24:12.982: INFO: coredns-6846b5b5f-5d7s9 started at 2023-01-30 09:04:22 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.982: INFO: Container coredns ready: false, restart count 5 Jan 30 09:24:12.982: INFO: kube-proxy-bootstrap-e2e-minion-group-hx8v started at 2023-01-30 09:04:09 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.982: INFO: Container kube-proxy ready: false, restart count 7 Jan 30 09:24:13.166: INFO: Latency metrics for node bootstrap-e2e-minion-group-hx8v END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 09:24:13.166 (1.163s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 09:24:13.166 (1.163s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 09:24:13.166 STEP: Destroying namespace "reboot-2584" for this suite. - test/e2e/framework/framework.go:347 @ 01/30/23 09:24:13.166 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/30/23 09:24:13.21 (45ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 09:24:13.21 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/30/23 09:24:13.21 (0s)
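Note on the repeated "Waiting up to ... for node ... condition Ready to be false/true", "Condition Ready of node ... is true instead of false", and "Couldn't get node ..." lines in these logs: the reboot test polls each Node object until its Ready condition flips to the expected value, and fails the node when the timeout (here 2m0s) expires first. The snippet below is only a rough sketch of that polling pattern using standard client-go calls; it is not the e2e framework's actual helper (that lives around test/e2e/cloud/gcp/reboot.go), and the name waitForNodeReadyStatus is illustrative.

package rebootwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReadyStatus polls the named Node every 2s until its Ready
// condition equals want, logging a line each time it does not match, and
// returns an error if the timeout expires first (which is what produces the
// "didn't reach desired Ready condition status ... within 2m0s" failures).
func waitForNodeReadyStatus(ctx context.Context, c kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			// Corresponds to the "Couldn't get node ..." lines seen while the
			// API server is unreachable during the disruption; keep polling.
			fmt.Printf("Couldn't get node %s: %v\n", name, err)
			return false, nil
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				if cond.Status == want {
					return true, nil
				}
				fmt.Printf("Condition Ready of node %s is %s instead of %s. Reason: %s, message: %s\n",
					name, cond.Status, want, cond.Reason, cond.Message)
				return false, nil
			}
		}
		return false, nil
	})
}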
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:24:11.895
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:22:09.794 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/30/23 09:22:09.794 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:22:09.794 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/30/23 09:22:09.794 Jan 30 09:22:09.794: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/30/23 09:22:09.795 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/30/23 09:22:09.922 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/30/23 09:22:10.003 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/30/23 09:22:10.083 (290ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:22:10.083 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/30/23 09:22:10.084 (0s) > Enter [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/30/23 09:22:10.084 Jan 30 09:22:10.181: INFO: Getting bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:10.181: INFO: Getting bootstrap-e2e-minion-group-hx8v Jan 30 09:22:10.181: INFO: Getting bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:10.255: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be true Jan 30 09:22:10.255: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be true Jan 30 09:22:10.257: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-hx8v condition Ready to be true Jan 30 09:22:10.299: INFO: Node bootstrap-e2e-minion-group-7cr1 has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-f6lhm kube-proxy-bootstrap-e2e-minion-group-7cr1] Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-f6lhm kube-proxy-bootstrap-e2e-minion-group-7cr1] Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.299: INFO: Node bootstrap-e2e-minion-group-ctd3 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-hb8pr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.299: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-f6lhm" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.300: INFO: Node bootstrap-e2e-minion-group-hx8v has 4 assigned pods with no liveness 
probes: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:22:10.300: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:22:10.300: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.300: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-xdrbh" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.300: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.300: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-ljgk8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 30 09:22:10.348: INFO: Pod "metadata-proxy-v0.1-hb8pr": Phase="Running", Reason="", readiness=true. Elapsed: 49.421227ms Jan 30 09:22:10.348: INFO: Pod "metadata-proxy-v0.1-hb8pr" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.348: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh": Phase="Running", Reason="", readiness=true. Elapsed: 47.902874ms Jan 30 09:22:10.348: INFO: Pod "kube-dns-autoscaler-5f6455f985-xdrbh" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.348: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1": Phase="Running", Reason="", readiness=true. Elapsed: 49.623907ms Jan 30 09:22:10.348: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-7cr1" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.348: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3": Phase="Running", Reason="", readiness=true. Elapsed: 49.539642ms Jan 30 09:22:10.348: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-ctd3" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.348: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-ctd3 metadata-proxy-v0.1-hb8pr] Jan 30 09:22:10.348: INFO: Getting external IP address for bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:10.348: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-ctd3(35.197.47.9:22) Jan 30 09:22:10.349: INFO: Pod "metadata-proxy-v0.1-f6lhm": Phase="Running", Reason="", readiness=true. Elapsed: 49.932948ms Jan 30 09:22:10.349: INFO: Pod "metadata-proxy-v0.1-f6lhm" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.349: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-f6lhm kube-proxy-bootstrap-e2e-minion-group-7cr1] Jan 30 09:22:10.349: INFO: Getting external IP address for bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:10.349: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-7cr1(34.82.80.94:22) Jan 30 09:22:10.349: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v": Phase="Running", Reason="", readiness=true. 
Elapsed: 48.906406ms Jan 30 09:22:10.349: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-hx8v" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.349: INFO: Pod "metadata-proxy-v0.1-ljgk8": Phase="Running", Reason="", readiness=true. Elapsed: 48.885795ms Jan 30 09:22:10.349: INFO: Pod "metadata-proxy-v0.1-ljgk8" satisfied condition "running and ready, or succeeded" Jan 30 09:22:10.349: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 49.202476ms Jan 30 09:22:10.349: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:10.863: INFO: ssh prow@35.197.47.9:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 30 09:22:10.863: INFO: ssh prow@35.197.47.9:22: stdout: "" Jan 30 09:22:10.863: INFO: ssh prow@35.197.47.9:22: stderr: "" Jan 30 09:22:10.863: INFO: ssh prow@35.197.47.9:22: exit code: 0 Jan 30 09:22:10.863: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-ctd3 condition Ready to be false Jan 30 09:22:10.871: INFO: ssh prow@34.82.80.94:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo b | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 30 09:22:10.871: INFO: ssh prow@34.82.80.94:22: stdout: "" Jan 30 09:22:10.871: INFO: ssh prow@34.82.80.94:22: stderr: "" Jan 30 09:22:10.871: INFO: ssh prow@34.82.80.94:22: exit code: 0 Jan 30 09:22:10.871: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-7cr1 condition Ready to be false Jan 30 09:22:10.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:10.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:12.400: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.099911514s Jan 30 09:22:12.400: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:12.950: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:12.962: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:14.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.091807768s Jan 30 09:22:14.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:14.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:15.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:16.391: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.091346698s Jan 30 09:22:16.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:17.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:17.056: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:18.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.092002308s Jan 30 09:22:18.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:19.084: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:19.099: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:20.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.09190808s Jan 30 09:22:20.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:21.126: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:21.142: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:22.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.092245339s Jan 30 09:22:22.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:23.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:23.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:22:24.393: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.093088034s Jan 30 09:22:24.393: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:25.215: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:25.231: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:26.391: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.091265166s Jan 30 09:22:26.391: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:27.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:27.274: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:28.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.091916398s Jan 30 09:22:28.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:29.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:29.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:30.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.091703389s Jan 30 09:22:30.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:31.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:31.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:22:32.392: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.091724284s Jan 30 09:22:32.392: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-hx8v' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:21:03 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-30 09:04:15 +0000 UTC }] Jan 30 09:22:33.386: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:33.401: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:34.389: INFO: Encountered non-retryable error while getting pod kube-system/volume-snapshot-controller-0: Get "https://35.185.231.33/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0": dial tcp 35.185.231.33:443: connect: connection refused Jan 30 09:22:34.389: INFO: Pod volume-snapshot-controller-0 failed to be running and ready, or succeeded. Jan 30 09:22:34.389: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: false. 
Pods: [kube-dns-autoscaler-5f6455f985-xdrbh kube-proxy-bootstrap-e2e-minion-group-hx8v metadata-proxy-v0.1-ljgk8 volume-snapshot-controller-0] Jan 30 09:22:34.389: INFO: Status for not ready pod kube-system/volume-snapshot-controller-0: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 09:04:15 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 09:21:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 09:21:03 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [volume-snapshot-controller]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-30 09:04:15 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.4 PodIP:10.64.0.45 PodIPs:[{IP:10.64.0.45}] StartTime:2023-01-30 09:04:15 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:volume-snapshot-controller State:{Waiting:&ContainerStateWaiting{Reason:CrashLoopBackOff,Message:back-off 2m40s restarting failed container=volume-snapshot-controller pod=volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2),} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-30 09:19:53 +0000 UTC,FinishedAt:2023-01-30 09:21:03 +0000 UTC,ContainerID:containerd://3b8cac8bf88bc5b80a6fbecade4d0f31abaae9c9142d6db038aecb309f6e7764,}} Ready:false RestartCount:10 Image:registry.k8s.io/sig-storage/snapshot-controller:v6.1.0 ImageID:registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 ContainerID:containerd://3b8cac8bf88bc5b80a6fbecade4d0f31abaae9c9142d6db038aecb309f6e7764 Started:0xc0047536ff}] QOSClass:BestEffort EphemeralContainerStatuses:[]} Jan 30 09:22:34.429: INFO: Retrieving log for container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.185.231.33/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.185.231.33:443: connect: connection refused: Jan 30 09:22:34.429: INFO: Retrieving log for the last terminated container kube-system/volume-snapshot-controller-0/volume-snapshot-controller, err: Get "https://35.185.231.33/api/v1/namespaces/kube-system/pods/volume-snapshot-controller-0/log?container=volume-snapshot-controller&previous=false": dial tcp 35.185.231.33:443: connect: connection refused: Jan 30 09:22:35.426: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:35.442: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:37.466: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:37.482: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:39.506: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:39.522: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:41.546: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:41.562: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:43.586: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:43.602: INFO: Couldn't 
get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:45.626: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:45.642: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:47.668: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:47.682: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:49.708: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:49.722: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:51.749: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:51.762: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:53.789: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:53.802: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:55.830: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:55.842: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:57.869: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:57.882: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:22:59.909: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:22:59.921: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:23:01.950: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:23:01.961: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:23:03.989: INFO: Couldn't get node bootstrap-e2e-minion-group-ctd3 Jan 30 09:23:04.002: INFO: Couldn't get node bootstrap-e2e-minion-group-7cr1 Jan 30 09:23:10.589: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:10.592: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:12.631: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:12.635: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:14.675: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:14.679: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:16.719: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:16.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:18.761: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:18.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 30 09:23:20.804: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:20.807: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:22.847: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:22.851: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:24.891: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:24.894: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:26.935: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:26.937: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:28.979: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:28.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:31.021: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:31.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:33.065: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:33.067: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:35.109: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:35.111: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:37.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:37.155: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:39.204: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:39.204: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:41.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:41.251: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:43.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:43.296: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:45.343: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:45.343: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:47.389: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:47.389: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:49.440: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:49.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:51.483: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:51.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:53.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:53.528: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:55.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:55.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 30 09:23:57.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 30 09:23:57.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:23:59.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:23:59.664: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:24:01.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:24:01.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:24:03.755: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:24:03.755: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:24:05.802: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:24:05.802: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:24:07.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:24:07.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:24:09.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-ctd3 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:24:09.895: INFO: Condition Ready of node bootstrap-e2e-minion-group-7cr1 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 30 09:24:11.895: INFO: Node bootstrap-e2e-minion-group-ctd3 didn't reach desired Ready condition status (false) within 2m0s
Jan 30 09:24:11.895: INFO: Node bootstrap-e2e-minion-group-7cr1 didn't reach desired Ready condition status (false) within 2m0s
Jan 30 09:24:11.895: INFO: Node bootstrap-e2e-minion-group-7cr1 failed reboot test.
Jan 30 09:24:11.895: INFO: Node bootstrap-e2e-minion-group-ctd3 failed reboot test.
Jan 30 09:24:11.895: INFO: Node bootstrap-e2e-minion-group-hx8v failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/30/23 09:24:11.895
< Exit [It] each node by ordering unclean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:103 @ 01/30/23 09:24:11.895 (2m1.812s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:24:11.895
STEP: Collecting events from namespace "kube-system".
- test/e2e/cloud/gcp/reboot.go:73 @ 01/30/23 09:24:11.895 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-5d7s9 to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.12:8181/ready": dial tcp 10.64.0.12:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5d7s9_kube-system(7bd270c5-f2ec-4a85-9058-86135914ebab) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": dial tcp 10.64.0.15:8181: i/o timeout (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.15:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-5d7s9 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-5d7s9_kube-system(7bd270c5-f2ec-4a85-9058-86135914ebab) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.31:8181/ready": dial tcp 10.64.0.31:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-5d7s9: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-w57z6 to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 3.519322897s (3.519341369s including waiting) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.4:8181/ready": dial tcp 10.64.0.4:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.10:8181/ready": dial tcp 10.64.0.10:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-w57z6_kube-system(1e79e82a-e647-48da-a4fd-05ad6d505eef) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-w57z6 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.23:8181/ready": dial tcp 10.64.0.23:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container coredns Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-w57z6_kube-system(1e79e82a-e647-48da-a4fd-05ad6d505eef) Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: Get "http://10.64.0.28:8181/ready": dial tcp 10.64.0.28:8181: connect: connection refused Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f-w57z6: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
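The Readiness probe failures recorded for the coredns pods above are the kubelet's periodic HTTP GET against CoreDNS's /ready endpoint on port 8181 (the URLs quoted in the events). A rough stand-in for that check, not the kubelet's implementation; the pod IP is one of the addresses from the events and purely illustrative:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Mimic an HTTP readiness probe: GET with a short timeout; a transport error
	// or a status outside 200-399 counts as a failed probe.
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://10.64.0.8:8181/ready") // illustrative pod IP taken from the events above
	if err != nil {
		// The errors above ("connection refused", "context deadline exceeded") surface here.
		fmt.Println("readiness probe failed:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		fmt.Println("readiness probe failed: HTTP", resp.StatusCode) // e.g. the 503s reported above
		return
	}
	fmt.Println("ready")
}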
Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-w57z6 Jan 30 09:24:11.955: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-5d7s9 Jan 30 09:24:11.955: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 30 09:24:11.955: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:24:11.955: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 30 09:24:11.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_94038 became leader Jan 30 09:24:11.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_32bba became leader Jan 30 09:24:11.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_b3e6 became leader Jan 30 09:24:11.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_8ed1e became leader Jan 30 09:24:11.955: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_e0e2a became leader Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-b8sc4 to bootstrap-e2e-minion-group-7cr1 Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 629.593814ms (629.614416ms including waiting) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.3:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Failed: Error: failed to get sandbox container task: no running task found: task 1d9c817ce846f529aa76391072c1a7fd56a9f47957fc17a2690b2671de27ff84 not found: not found Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
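The "Container konnectivity-agent failed liveness probe, will be restarted" events above are the kubelet giving up after several consecutive liveness failures. A sketch of that decision loop, assuming the Kubernetes default probe settings (10s period, failure threshold 3) and using the agent's healthz URL from the events purely as an example:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one liveness-style HTTP GET and reports success or failure.
func probeOnce(client *http.Client, url string) bool {
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}

func main() {
	const (
		period           = 10 * time.Second // assumed default periodSeconds
		failureThreshold = 3                // assumed default failureThreshold
	)
	client := &http.Client{Timeout: time.Second}
	failures := 0
	for {
		if probeOnce(client, "http://10.64.3.2:8093/healthz") { // illustrative pod IP from the events above
			failures = 0
		} else {
			failures++
		}
		if failures >= failureThreshold {
			// Corresponds to "failed liveness probe, will be restarted" in the events.
			fmt.Println("liveness threshold reached: container would be killed and restarted")
			return
		}
		time.Sleep(period)
	}
}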
Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-b8sc4_kube-system(f6d868e6-1c3b-43a3-ad9d-01a41c072da7) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Unhealthy: Liveness probe failed: Get "http://10.64.3.8:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent-b8sc4: {kubelet bootstrap-e2e-minion-group-7cr1} Failed: Error: failed to get sandbox container task: no running task found: task 66dcd5feab4c1724b438578787d177422e28292e6127e94d8530082238cd5f9d not found: not found Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-rj7fc to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 1.815723669s (1.81573909s including waiting) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.7:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet 
bootstrap-e2e-minion-group-hx8v} Failed: Error: failed to get sandbox container task: no running task found: task 11f3dcad8b3972dd50b4e21b10c349a64def00d0106a07d500fcf4637de4bd0d not found: not found Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.17:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rj7fc_kube-system(d1e6165b-b63d-4023-904f-a42ff691e8ae) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-rj7fc_kube-system(d1e6165b-b63d-4023-904f-a42ff691e8ae) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.41:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-rj7fc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-skfnx to bootstrap-e2e-minion-group-ctd3 Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 625.155725ms (625.171974ms including waiting) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container konnectivity-agent Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "http://10.64.2.8:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for konnectivity-agent-skfnx: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-rj7fc Jan 30 09:24:11.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-b8sc4 Jan 30 09:24:11.955: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-skfnx Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 30 09:24:11.955: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 30 09:24:11.955: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 30 09:24:11.955: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(4fc5a5aeac3c203e3876adb08d878c93) Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 30 09:24:11.955: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
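The recurring "Back-off restarting failed container" events for the master components above come from the kubelet's crash-loop back-off. A small illustration of the delay schedule, assuming the kubelet defaults of a 10s initial delay that doubles per consecutive crash and is capped at 5 minutes:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Sketch of the restart delay behind "Back-off restarting failed container",
	// assuming the default back-off parameters described above.
	const maxBackoff = 5 * time.Minute
	delay := 10 * time.Second
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("crash %d -> wait %s before next restart\n", attempt, delay)
		delay *= 2
		if delay > maxBackoff {
			delay = maxBackoff
		}
	}
}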
Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_ce182e7b-00b7-4169-8624-f53196308681 became leader Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_274fd4ca-797b-43c2-b1b6-f36d9e36c2e7 became leader Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_4b65a89d-ba5e-49e4-8048-9ec50f56a58a became leader Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c0831107-4ce4-4c38-b5d9-9a3dd92f107b became leader Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a859a42b-ffd0-49bb-9be9-fb447a7be398 became leader Jan 30 09:24:11.955: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_c030a8db-17a1-4d09-80f9-d039b630d51d became leader Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-xdrbh to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 3.161986755s (3.162044961s including waiting) Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-xdrbh Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-xdrbh_kube-system(e7e6cc3b-cfe7-4dd0-832f-ec18c94765b2) Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container autoscaler Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-xdrbh_kube-system(e7e6cc3b-cfe7-4dd0-832f-ec18c94765b2) Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985-xdrbh: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-xdrbh Jan 30 09:24:11.955: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 30 09:24:11.955: INFO: event for kube-dns: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" 
already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7cr1_kube-system(dd1d9c1acf429448066a68f4147cfb77) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7cr1_kube-system(dd1d9c1acf429448066a68f4147cfb77) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {kubelet bootstrap-e2e-minion-group-7cr1} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-7cr1_kube-system(dd1d9c1acf429448066a68f4147cfb77) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-7cr1: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ctd3_kube-system(f92a9aed872df1bead32b1c0dd213385) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
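The DNSConfigForming warning above ("Nameserver limits were exceeded, some nameservers have been omitted") means the node's resolv.conf listed more nameservers than the kubelet passes through (at most three), so the surplus entries were dropped and only the applied line remains. An illustrative reimplementation of that trimming, not the kubelet's code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Read the node's resolv.conf and keep at most three nameservers, mirroring
	// the limit behind the DNSConfigForming warning above.
	const maxNameservers = 3
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()
	var nameservers []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if len(nameservers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded, keeping first %d of %v\n", maxNameservers, nameservers)
		nameservers = nameservers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(nameservers, " "))
}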
Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {kubelet bootstrap-e2e-minion-group-ctd3} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-ctd3_kube-system(f92a9aed872df1bead32b1c0dd213385) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-ctd3: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hx8v_kube-system(acb97e253f2500aa0581d024a2217293) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hx8v_kube-system(acb97e253f2500aa0581d024a2217293) Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container kube-proxy Jan 30 09:24:11.955: INFO: event for kube-proxy-bootstrap-e2e-minion-group-hx8v: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-hx8v_kube-system(acb97e253f2500aa0581d024a2217293) Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4" already present on machine Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(5b3c0a3dad3d723f9e5778ab0a62849c) Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_34628f7a-9073-4ee1-9bb3-51be47583fdb became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_914f6c8b-8db8-44f8-a433-b4e094f84179 became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_7f5e92cb-6a3a-45d0-be98-a7453645cadf became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_2e191714-4d71-4061-b14a-06b3d43bf967 became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_3d1c3010-6ad9-4e0d-a7ef-bb1aad2ccd0c became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_c51b6cd6-821c-4b76-bb1a-178d895540fe became leader Jan 30 09:24:11.955: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4bab283c-48f6-476b-8a11-39987d9e7dc1 became leader Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-fq84f to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.812003002s (1.812012686s including waiting) Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.6:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-fq84f Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container default-http-backend Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Liveness probe failed: Get "http://10.64.0.36:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99-fq84f: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.955: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-fq84f Jan 30 09:24:11.955: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 30 09:24:11.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 30 09:24:11.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 30 09:24:11.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 30 09:24:11.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 30 09:24:11.955: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 30 09:24:11.955: INFO: event for metadata-proxy-v0.1-d2qbs: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-d2qbs to bootstrap-e2e-master Jan 30 09:24:11.955: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 767.072137ms (767.083529ms including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.898844974s (1.898853058s including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-d2qbs: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-f6lhm to bootstrap-e2e-minion-group-7cr1 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 760.69768ms (760.732368ms including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.890232448s (1.890241652s including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {kubelet bootstrap-e2e-minion-group-7cr1} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-f6lhm: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-hb8pr to bootstrap-e2e-minion-group-ctd3 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 790.982977ms (791.000841ms including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.062535103s (2.062546601s including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {kubelet bootstrap-e2e-minion-group-ctd3} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-hb8pr: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-ljgk8 to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 732.378395ms (732.411068ms including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet 
bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.82877905s (1.828788865s including waiting) Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metadata-proxy Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container prometheus-to-sd-exporter Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {kubelet bootstrap-e2e-minion-group-hx8v} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1-ljgk8: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-ljgk8 Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-d2qbs Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-hb8pr Jan 30 09:24:11.956: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-f6lhm Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-v25xc to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.900251051s (3.900291297s including waiting) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.359025956s (3.35903606s including waiting) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c-v25xc: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-v25xc Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-q4757 to bootstrap-e2e-minion-group-ctd3 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.428302629s (1.428313025s including waiting) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.023447114s (1.023460341s including waiting) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": dial tcp 10.64.2.3:10250: connect: connection refused Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.3:10250/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Killing: Stopping container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.7:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.7:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Created: Created container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Started: Started container metrics-server-nanny Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.9:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Liveness probe failed: Get "https://10.64.2.9:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: Get "https://10.64.2.9:10250/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {kubelet bootstrap-e2e-minion-group-ctd3} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9-q4757: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-q4757 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 30 09:24:11.956: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-hx8v Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 3.510528775s (3.510537487s including waiting) Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2) Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2) Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Created: Created container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Started: Started container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} Killing: Stopping container volume-snapshot-controller Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-hx8v} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(c2d42366-14d4-4e0b-bcd7-a6055ffe56f2) Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 30 09:24:11.956: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/30/23 09:24:11.956 (61ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:24:11.956 Jan 30 09:24:11.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/30/23 09:24:12.002 (46ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:24:12.002 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/30/23 09:24:12.002 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/30/23 09:24:12.002 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/30/23 09:24:12.002 STEP: Collecting events from namespace "reboot-2584". - test/e2e/framework/debug/dump.go:42 @ 01/30/23 09:24:12.002 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/30/23 09:24:12.043 Jan 30 09:24:12.085: INFO: POD NODE PHASE GRACE CONDITIONS Jan 30 09:24:12.085: INFO: Jan 30 09:24:12.129: INFO: Logging node info for node bootstrap-e2e-master Jan 30 09:24:12.172: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master a34af008-0528-47e4-a6c5-cd39d827847f 2496 0 2023-01-30 09:04:11 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-30 09:04:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-30 09:04:35 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-30 09:20:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:20:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:20:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:20:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:20:40 +0000 UTC,LastTransitionTime:2023-01-30 09:04:32 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:35.185.231.33,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:87a05ebeec11f95c366dec3ebfb54572,SystemUUID:87a05ebe-ec11-f95c-366d-ec3ebfb54572,BootID:b21fbdba-5e8a-4560-8e5c-0b3f13ec273b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:135961043,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:125279033,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:57551672,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 30 09:24:12.172: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 30 09:24:12.237: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 30 09:24:12.340: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container etcd-container ready: true, restart count 1 Jan 30 09:24:12.340: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container kube-apiserver ready: true, restart count 1 Jan 30 09:24:12.340: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container kube-addon-manager ready: true, restart count 3 Jan 30 09:24:12.340: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-30 09:03:44 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container l7-lb-controller ready: true, restart count 7 Jan 30 09:24:12.340: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container etcd-container ready: true, restart count 4 Jan 30 09:24:12.340: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container konnectivity-server-container ready: true, restart count 3 Jan 30 09:24:12.340: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container kube-controller-manager ready: false, restart count 6 Jan 30 09:24:12.340: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-30 09:03:26 +0000 UTC (0+1 container statuses recorded) Jan 30 09:24:12.340: INFO: Container kube-scheduler ready: false, restart count 6 Jan 30 09:24:12.340: INFO: metadata-proxy-v0.1-d2qbs started at 2023-01-30 09:04:49 +0000 UTC (0+2 container statuses recorded) Jan 30 09:24:12.340: INFO: Container metadata-proxy ready: true, restart count 0 Jan 30 09:24:12.340: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 30 09:24:12.523: INFO: Latency metrics for node bootstrap-e2e-master Jan 30 09:24:12.523: INFO: Logging node info for node bootstrap-e2e-minion-group-7cr1 Jan 30 09:24:12.565: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-7cr1 059c215f-20bf-4d2a-9d08-dd76e71cd121 2658 0 2023-01-30 09:04:13 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-7cr1 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:17:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 09:19:52 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-30 09:22:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 09:23:35 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-7cr1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:35 +0000 UTC,LastTransitionTime:2023-01-30 09:23:34 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:52 +0000 UTC,LastTransitionTime:2023-01-30 09:19:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:52 +0000 UTC,LastTransitionTime:2023-01-30 09:19:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:52 +0000 UTC,LastTransitionTime:2023-01-30 09:19:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-30 09:19:52 +0000 UTC,LastTransitionTime:2023-01-30 09:19:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.82.80.94,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-7cr1.c.k8s-jkns-gci-gce-slow-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:152f308b39d31a7b07927ba8747dc4e6,SystemUUID:152f308b-39d3-1a7b-0792-7ba8747dc4e6,BootID:d9763507-0052-4a05-9205-8ee7c75cec27,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-17-g3695f29c3,KubeletVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,KubeProxyVersion:v1.27.0-alpha.1.88+7b243cef1a81f4,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.88_7b243cef1a81f4],SizeBytes:66988744,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 30 09:24:12.566: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-7cr1
Jan 30 09:24:12.613: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-7cr1
Jan 30 09:24:12.660: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-7cr1: error trying to reach service: dial tcp 10.138.0.3:10250: connect: connection refused
Jan 30 09:24:12.660: INFO: Logging node info for node bootstrap-e2e-minion-group-ctd3
Jan 30 09:24:12.703: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-ctd3 1fc63985-9867-4666-aa14-c3224e06ef55 2655 0 2023-01-30 09:04:12 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-ctd3 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-30 09:04:12 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-30 09:17:29 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-30 09:19:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-30 09:22:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2023-01-30 09:23:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-gci-gce-slow-1-2/us-west1-b/bootstrap-e2e-minion-group-ctd3,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} 
{<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-30 09:23:34 +0000 UTC,LastTransitionTime:2023-01-30 09:23:33 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-30 09:04:29 +0000 UTC,LastTransitionTime:2023-01-30 09:04:29 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:54 +0000 UTC,LastTransitionTime:2023-01-30 09:19:54 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-30 09:19:54 +0000 UTC,LastTransitionTime:2023-01-30 09:19:54