go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 22:11:48.693
from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 22:09:34.792
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 22:09:34.792 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 22:09:34.792
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 22:09:34.792
Jan 29 22:09:34.792: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 22:09:34.793
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 22:09:34.943
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 22:09:35.024
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 22:09:35.106 (314ms)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 22:09:35.106
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 22:09:35.106 (0s)
> Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 22:09:35.106
Jan 29 22:09:35.202: INFO: Getting bootstrap-e2e-minion-group-0h23
Jan 29 22:09:35.252: INFO: Getting bootstrap-e2e-minion-group-prl8
Jan 29 22:09:35.252: INFO: Getting bootstrap-e2e-minion-group-qp90
Jan 29 22:09:35.275: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-0h23 condition Ready to be true
Jan 29 22:09:35.295: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-qp90 condition Ready to be true
Jan 29 22:09:35.295: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-prl8 condition Ready to be true
Jan 29 22:09:35.319: INFO: Node bootstrap-e2e-minion-group-0h23 has 4 assigned pods with no liveness probes: [metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23]
Jan 29 22:09:35.319: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23]
Jan 29 22:09:35.319: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-0h23" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:09:35.319: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:09:35.319: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-8w5rj" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:09:35.319: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-7h8xr" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:09:35.336: INFO: Node bootstrap-e2e-minion-group-qp90 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd]
Jan 29 22:09:35.336: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd]
Jan 29 22:09:35.336: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-n78nd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:09:35.336: INFO: Node bootstrap-e2e-minion-group-prl8 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr]
Jan 29 22:09:35.336: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr]
Jan 29 22:09:35.336: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-gjgkr" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:09:35.337: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-prl8" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:09:35.337: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qp90" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:09:35.362: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0h23": Phase="Running", Reason="", readiness=true. Elapsed: 42.53501ms
Jan 29 22:09:35.362: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0h23" satisfied condition "running and ready, or succeeded"
Jan 29 22:09:35.363: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 43.332106ms
Jan 29 22:09:35.363: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 22:09:35.364: INFO: Pod "kube-dns-autoscaler-5f6455f985-8w5rj": Phase="Running", Reason="", readiness=true. Elapsed: 44.551507ms
Jan 29 22:09:35.364: INFO: Pod "kube-dns-autoscaler-5f6455f985-8w5rj" satisfied condition "running and ready, or succeeded"
Jan 29 22:09:35.364: INFO: Pod "metadata-proxy-v0.1-7h8xr": Phase="Running", Reason="", readiness=true. Elapsed: 44.54157ms
Jan 29 22:09:35.364: INFO: Pod "metadata-proxy-v0.1-7h8xr" satisfied condition "running and ready, or succeeded"
Jan 29 22:09:35.364: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23]
Jan 29 22:09:35.364: INFO: Getting external IP address for bootstrap-e2e-minion-group-0h23
Jan 29 22:09:35.364: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-0h23(35.247.69.167:22)
Jan 29 22:09:35.381: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8": Phase="Running", Reason="", readiness=true. Elapsed: 44.890331ms
Jan 29 22:09:35.381: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8" satisfied condition "running and ready, or succeeded"
Jan 29 22:09:35.381: INFO: Pod "metadata-proxy-v0.1-gjgkr": Phase="Running", Reason="", readiness=true. Elapsed: 45.03804ms
Jan 29 22:09:35.381: INFO: Pod "metadata-proxy-v0.1-gjgkr" satisfied condition "running and ready, or succeeded"
Jan 29 22:09:35.382: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr]
Jan 29 22:09:35.382: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=true. Elapsed: 44.91786ms
Jan 29 22:09:35.382: INFO: Getting external IP address for bootstrap-e2e-minion-group-prl8
Jan 29 22:09:35.382: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90" satisfied condition "running and ready, or succeeded"
Jan 29 22:09:35.382: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-prl8(35.197.11.253:22)
Jan 29 22:09:35.382: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=true. Elapsed: 45.142468ms
Jan 29 22:09:35.382: INFO: Pod "metadata-proxy-v0.1-n78nd" satisfied condition "running and ready, or succeeded"
Jan 29 22:09:35.382: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd]
Jan 29 22:09:35.382: INFO: Getting external IP address for bootstrap-e2e-minion-group-qp90
Jan 29 22:09:35.382: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-qp90(34.82.19.122:22)
Jan 29 22:09:35.886: INFO: ssh prow@35.247.69.167:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 22:09:35.886: INFO: ssh prow@35.247.69.167:22: stdout: ""
Jan 29 22:09:35.886: INFO: ssh prow@35.247.69.167:22: stderr: ""
Jan 29 22:09:35.886: INFO: ssh prow@35.247.69.167:22: exit code: 0
Jan 29 22:09:35.886: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-0h23 condition Ready to be false
Jan 29 22:09:35.907: INFO: ssh prow@34.82.19.122:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 22:09:35.907: INFO: ssh prow@34.82.19.122:22: stdout: ""
Jan 29 22:09:35.907: INFO: ssh prow@34.82.19.122:22: stderr: ""
Jan 29 22:09:35.907: INFO: ssh prow@34.82.19.122:22: exit code: 0
Jan 29 22:09:35.907: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-qp90 condition Ready to be false
Jan 29 22:09:35.907: INFO: ssh prow@35.197.11.253:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 22:09:35.907: INFO: ssh prow@35.197.11.253:22: stdout: ""
Jan 29 22:09:35.907: INFO: ssh prow@35.197.11.253:22: stderr: ""
Jan 29 22:09:35.907: INFO: ssh prow@35.197.11.253:22: exit code: 0
Jan 29 22:09:35.907: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-prl8 condition Ready to be false
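Note: the iptables script the test pushes over SSH (logged in escaped form above and echoed back by each node) is easier to follow unescaped. The following is the same script reconstructed from the log; the comments are editorial annotations, not part of the test:

	nohup sh -c '
		set -x       # trace each command into the log file
		sleep 10     # let the SSH session that launched this script disconnect first
		# Insert an ACCEPT rule for loopback traffic at the top of INPUT, then a
		# DROP-everything rule right after it; the retry loops guard against
		# transient iptables failures such as contention on the xtables lock.
		while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
		while true; do sudo iptables -I INPUT 2 -j DROP && break; done
		date         # record when the inbound blackout began
		sleep 120    # hold the blackout for two minutes
		# Delete the two rules again, restoring inbound connectivity.
		while true; do sudo iptables -D INPUT -j DROP && break; done
		while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
	' >/tmp/drop-inbound.log 2>&1 &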
Jan 29 22:09:35.928: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:35.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:35.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:37.970: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:37.992: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:37.992: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:40.015: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:40.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:40.038: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:42.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:42.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:42.085: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:44.111: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:44.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:44.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:46.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:46.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:46.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:48.196: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:48.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:48.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:50.243: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:50.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:50.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:52.286: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:52.304: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:52.305: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:54.330: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:54.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:54.348: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:56.372: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:56.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:56.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:58.415: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:58.438: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:58.438: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:00.458: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:00.481: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:00.481: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:02.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:02.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:02.526: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:04.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:04.569: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:04.570: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:06.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:06.613: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:06.613: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:08.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:08.659: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:08.659: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:10.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:10.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:10.702: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:12.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:12.745: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:12.745: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:14.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:14.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:14.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:16.832: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:16.839: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:16.839: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:18.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:18.882: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-qp90 condition Ready to be true
Jan 29 22:10:18.882: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:18.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:10:20.918: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:20.924: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:20.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:10:47.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:47.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:47.772: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:10:49.820: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:49.820: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:10:49.820: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:51.865: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:51.865: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:51.865: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:10:53.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:53.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:53.911: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:10:55.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:55.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:10:55.956: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:58.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:58.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:10:58.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:00.049: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:00.049: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:00.049: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:02.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:02.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:02.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:04.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:04.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:04.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:06.184: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:06.184: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:06.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:08.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:08.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:08.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:10.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:10.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:10.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:12.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:12.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:12.325: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:14.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:14.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:14.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:16.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:16.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:16.413: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:18.454: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:18.455: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:18.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:20.498: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:20.498: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:20.499: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:22.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:22.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:22.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:24.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:24.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:24.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:26.645: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:26.645: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:26.645: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:28.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:28.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:28.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:30.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:30.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:30.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:32.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:32.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:32.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:34.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:34.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:34.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:11:36.828: INFO: Node bootstrap-e2e-minion-group-prl8 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 22:11:36.828: INFO: Node bootstrap-e2e-minion-group-0h23 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 22:11:36.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:38.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:40.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:42.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:45.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:47.085: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd]
Jan 29 22:11:47.085: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-n78nd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:11:47.085: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qp90" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:11:47.127: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=true. Elapsed: 42.389576ms
Jan 29 22:11:47.127: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90" satisfied condition "running and ready, or succeeded"
Jan 29 22:11:47.128: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=true. Elapsed: 42.566181ms
Jan 29 22:11:47.128: INFO: Pod "metadata-proxy-v0.1-n78nd" satisfied condition "running and ready, or succeeded"
Jan 29 22:11:47.128: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd]
Jan 29 22:11:47.128: INFO: Reboot successful on node bootstrap-e2e-minion-group-qp90
Jan 29 22:11:47.128: INFO: Node bootstrap-e2e-minion-group-0h23 failed reboot test.
Jan 29 22:11:47.128: INFO: Node bootstrap-e2e-minion-group-prl8 failed reboot test.
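Note: the 2m0s wait that bootstrap-e2e-minion-group-0h23 and bootstrap-e2e-minion-group-prl8 just failed is the test polling for each node's Ready condition to go false. As an editorial aside (not part of the test), the same transition can be watched by hand with kubectl against the live cluster, assuming kubeconfig access:

	# Block until the node reports Ready=false, with the same 2-minute budget the test uses
	kubectl wait --for=condition=Ready=false node/bootstrap-e2e-minion-group-0h23 --timeout=2m
	# If the wait times out, inspect the node's condition details
	kubectl describe node bootstrap-e2e-minion-group-0h23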
Jan 29 22:11:47.128: INFO: Executing termination hook on nodes
Jan 29 22:11:47.128: INFO: Getting external IP address for bootstrap-e2e-minion-group-0h23
Jan 29 22:11:47.128: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-0h23(35.247.69.167:22)
Jan 29 22:11:47.647: INFO: ssh prow@35.247.69.167:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 22:11:47.647: INFO: ssh prow@35.247.69.167:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:09:45 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 22:11:47.647: INFO: ssh prow@35.247.69.167:22: stderr: ""
Jan 29 22:11:47.647: INFO: ssh prow@35.247.69.167:22: exit code: 0
Jan 29 22:11:47.647: INFO: Getting external IP address for bootstrap-e2e-minion-group-prl8
Jan 29 22:11:47.647: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-prl8(35.197.11.253:22)
Jan 29 22:11:48.172: INFO: ssh prow@35.197.11.253:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 22:11:48.172: INFO: ssh prow@35.197.11.253:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:09:45 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 22:11:48.172: INFO: ssh prow@35.197.11.253:22: stderr: ""
Jan 29 22:11:48.172: INFO: ssh prow@35.197.11.253:22: exit code: 0
Jan 29 22:11:48.172: INFO: Getting external IP address for bootstrap-e2e-minion-group-qp90
Jan 29 22:11:48.172: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-qp90(34.82.19.122:22)
Jan 29 22:11:48.693: INFO: ssh prow@34.82.19.122:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 22:11:48.693: INFO: ssh prow@34.82.19.122:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:09:45 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 22:11:48.693: INFO: ssh prow@34.82.19.122:22: stderr: ""
Jan 29 22:11:48.693: INFO: ssh prow@34.82.19.122:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 22:11:48.693
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 22:11:48.693 (2m13.587s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 22:11:48.693
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 22:11:48.693
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-67jtp to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.461832289s (2.461840828s including waiting)
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.2:8181/ready": dial tcp 10.64.0.2:8181: connect: connection refused
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": dial tcp 10.64.0.22:8181: connect: connection refused
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-67jtp_kube-system(72ca1a62-bb47-4fdd-8565-8cdea1e5a00a)
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.28:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-q6pbg to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618)
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": dial tcp 10.64.0.26:8181: connect: connection refused
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618)
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.29:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-67jtp
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-q6pbg
Jan 29 22:11:48.742: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 22:11:48.742: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300)
Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb)
Jan 29 22:11:48.742: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_fd4b became leader
Jan 29 22:11:48.742: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_55acf became leader
Jan 29 22:11:48.742: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ad28a became leader
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-68c9g to bootstrap-e2e-minion-group-prl8
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 657.613501ms (657.634978ms including waiting)
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f)
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f)
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-c8fqq to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 956.296756ms (956.305606ms including waiting)
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-c8fqq_kube-system(0836b571-aa7d-46e2-846d-c2ef4dcbfd76)
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Liveness probe failed: Get "http://10.64.0.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Failed: Error: failed to get sandbox container task: no running task found: task b2c0d64625e18667eee1d0a95e38a58d19d52df858184ed33ed54f65ddc2f556 not found: not found
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-c8fqq_kube-system(0836b571-aa7d-46e2-846d-c2ef4dcbfd76)
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-srg78 to bootstrap-e2e-minion-group-qp90
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 679.018448ms (679.041957ms including waiting)
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent
Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {node-controller } NodeNotReady: Node is not ready
29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-srg78_kube-system(e0557a1e-0314-4bfe-8bff-7b1532b1bc85) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "http://10.64.3.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {node-controller } NodeNotReady: Node is not ready Jan 29 22:11:48.742: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-c8fqq Jan 29 22:11:48.742: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-srg78 Jan 29 22:11:48.742: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-68c9g Jan 29 22:11:48.742: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 22:11:48.742: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 22:11:48.742: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 22:11:48.742: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
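The konnectivity-agent churn above follows the standard kubelet probe cycle: the HTTP liveness probe times out ("context deadline exceeded"), the kubelet records Unhealthy, and after enough consecutive failures it logs "failed liveness probe, will be restarted", kills the container, and the restarts feed the BackOff entries. A minimal Go sketch of a probe with the shape implied by those URLs; the port and path come from the events, while the timing and threshold values are assumptions rather than the addon's actual manifest:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// Port and path match the probe URLs in the events above
	// (e.g. http://10.64.2.5:8093/healthz); the numeric values are assumed.
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(8093),
			},
		},
		TimeoutSeconds:   1,  // a short timeout surfaces as "context deadline exceeded"
		PeriodSeconds:    10, // assumed probe interval
		FailureThreshold: 3,  // after 3 misses: "failed liveness probe, will be restarted"
	}
	fmt.Printf("%+v\n", probe)
}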
Jan 29 22:11:48.742: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 22:11:48.742: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 22:11:48.742: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 22:11:48.742: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 22:11:48.742: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
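The kube-apiserver events above show its own health endpoints refusing connections while the container restarts: /readyz for readiness and /livez (with etcd and the KMS providers excluded) for liveness. A small sketch that queries the same endpoints from the master host; the insecure TLS config is purely illustrative, and on clusters that disable anonymous auth these paths may require credentials:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skipping cert verification is for illustration only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	paths := []string{
		"/readyz?verbose",
		"/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1", // same exclusions as the probe above
	}
	for _, p := range paths {
		resp, err := client.Get("https://127.0.0.1:443" + p)
		if err != nil {
			// While the apiserver container is down this mirrors the event:
			// "dial tcp 127.0.0.1:443: connect: connection refused"
			fmt.Println(p, "=>", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(p, "=>", resp.Status, string(body))
	}
}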
Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
Jan 29 22:11:48.742: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_17b47e1a-c3ff-42ad-b566-12beffed0288 became leader
Jan 29 22:11:48.742: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a96406e5-1a2d-415b-8674-47808fdfe3fe became leader
Jan 29 22:11:48.742: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_12be7f8d-96f2-4959-9cf6-ed72d48a5404 became leader
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-8w5rj to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.575856713s (2.575872946s including waiting)
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container autoscaler
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8w5rj_kube-system(7b9fb270-f42e-4c3d-9947-2b7804b28b97)
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
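The FailedScheduling message above is the scheduler refusing the only registered node because it still carried the node.kubernetes.io/not-ready taint. A toleration like the following sketch would let a pod schedule despite that taint (an Exists toleration with no effect specified matches any effect); DaemonSet pods get equivalent tolerations added automatically, which is consistent with only the Deployment and StatefulSet pods logging FailedScheduling in this dump:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Tolerates the taint named in the FailedScheduling event, whatever its
	// effect; normally only DaemonSets and node-critical pods carry this.
	tol := corev1.Toleration{
		Key:      "node.kubernetes.io/not-ready",
		Operator: corev1.TolerationOpExists,
	}
	fmt.Printf("%+v\n", tol)
}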
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container autoscaler
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8w5rj_kube-system(7b9fb270-f42e-4c3d-9947-2b7804b28b97)
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-8w5rj
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0h23_kube-system(a7d7c673a5678c3fd05bb8d81e613fd2)
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
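The DNSConfigForming warning above is the kubelet enforcing the resolv.conf limit of three nameservers (glibc's MAXNS): the node had more configured, so the applied line was truncated to exactly three. A toy reproduction of that truncation; the fourth server is a made-up stand-in for whatever extra entry the node really had:

package main

import (
	"fmt"
	"strings"
)

const maxDNSNameservers = 3 // the kubelet's cap, mirroring glibc's MAXNS

func main() {
	// First three values are from the event text; the fourth is hypothetical.
	have := []string{"1.1.1.1", "8.8.8.8", "1.0.0.1", "192.0.2.53"}
	applied := have
	if len(applied) > maxDNSNameservers {
		applied = applied[:maxDNSNameservers]
	}
	fmt.Println("the applied nameserver line is:", strings.Join(applied, " "))
}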
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-prl8_kube-system(af7f7d5ac5e113eedfb5c13ec70c059c)
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
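The recurring "Back-off restarting failed container" entries for kube-proxy (and the other pods above) reflect the kubelet's crash-loop backoff: restart delays start at 10s and double per failure up to a 5m cap, resetting once a container runs cleanly for 10 minutes. A toy calculation of that schedule, based on the documented kubelet behavior:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // documented initial backoff
	const maxDelay = 5 * time.Minute // documented cap
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: wait %v before next attempt\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}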
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qp90_kube-system(fdc7414ccaf4c7060bb3a896ee9c4fdc)
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 22:11:48.742: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e5aa9ff1-292b-44e6-a72b-8735e76d222a became leader
Jan 29 22:11:48.742: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_68b1b904-ad42-431c-80bb-86195fbcd230 became leader
Jan 29 22:11:48.742: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_65313fb6-cd85-4780-9c60-766a799fefea became leader
Jan 29 22:11:48.742: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4b1c330c-d507-49e9-bb07-682f604268de became leader
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
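The LeaderElection events above (for kube-scheduler, kube-controller-manager, and ingress-gce-lock) show each component re-acquiring its lock after every control-plane restart; the identity suffix changes because each new process generates a fresh one. A minimal client-go sketch of the same mechanism using a Lease lock; the lock name, namespace, identity, and kubeconfig source are placeholders, and the components above may use different lock resources:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "demo-lock", Namespace: "kube-system"}, // placeholder lock
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "demo-holder-1"}, // placeholder identity
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { fmt.Println("became leader") },
			OnStoppedLeading: func() { fmt.Println("lost leadership") },
		},
	})
}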
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-br722 to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.254994621s (1.255003973s including waiting)
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Liveness probe failed: Get "http://10.64.0.23:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-br722
Jan 29 22:11:48.742: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7h8xr to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 728.14263ms (728.154201ms including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.813378152s (1.81340007s including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
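All of the "event for ..." lines in this dump share one format: involved object, then {source-component source-host}, then reason and message. A minimal sketch of producing the same listing with client-go, not the e2e framework's actual implementation; the kubeconfig path is the one this run used:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config") // path from this run
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	events, err := cs.CoreV1().Events("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		// Mirrors the log format: event for NAME: {component host} Reason: Message
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
}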
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-gjgkr to bootstrap-e2e-minion-group-prl8
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 725.023258ms (725.04726ms including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.833322514s (1.833331253s including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-n78nd to bootstrap-e2e-minion-group-qp90
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 789.594528ms (789.609762ms including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.896285117s (1.896293813s including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-phrn6 to bootstrap-e2e-master
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 638.236648ms (638.252765ms including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.561997891s (1.56200326s including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-7h8xr
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-phrn6
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-gjgkr
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-n78nd
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-858xc to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.198313689s (3.198321554s including waiting)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.812916392s (3.812924842s including waiting)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
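The NodeNotReady entries scattered through this dump come from the node controller reacting to each node's Ready condition going false while inbound packets were dropped. A small sketch of checking that same condition directly with client-go; the node name is one of the three in this job:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "bootstrap-e2e-minion-group-0h23", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The reboot test polls this condition; NodeNotReady events fire while Status != True.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s since %s (%s)\n", c.Status, c.LastTransitionTime, c.Reason)
		}
	}
}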
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-858xc
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-858xc
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-qmbs6 to bootstrap-e2e-minion-group-qp90
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.353709849s (1.353731831s including waiting)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.015217229s (1.01523164s including waiting)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": read tcp 10.64.3.1:36350->10.64.3.3:10250: read: connection reset by peer
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Container metrics-server failed liveness probe, will be restarted
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Failed: Error: failed to get sandbox container task: no running task found: task 4ac2767f3e99f3d72489c6f4ac8b5d5588d1b55aca1cdd3beefe33bfd1fb8c2e not found: not found
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.8:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.8:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-qmbs6_kube-system(44703c8b-4289-449f-8dce-96f50d686272)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-qmbs6
Jan 29 22:11:48.743: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 22:11:48.743: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 22:11:48.743: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
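The ScalingReplicaSet events above record the deployment controller replacing metrics-server's old ReplicaSet (6764bf875c) with a new one (867b8754b9): the new set is scaled to 1, then the old one down to 0. A sketch that lists a Deployment's ReplicaSets and their replica counts to observe such a rollout; the k8s-app=metrics-server selector is an assumption about the addon's labels:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Assumed label selector; adjust to the Deployment's actual labels.
	rss, err := cs.AppsV1().ReplicaSets("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
	if err != nil {
		panic(err)
	}
	for _, rs := range rss.Items {
		fmt.Printf("%s: %d ready / %d desired\n", rs.Name, rs.Status.ReadyReplicas, rs.Status.Replicas)
	}
}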
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.290617862s (2.290627616s including waiting)
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(1b9daa28-15d1-49b3-a153-e62f36714b55)
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(1b9daa28-15d1-49b3-a153-e62f36714b55)
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 22:11:48.743 (50ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 22:11:48.743
Jan 29 22:11:48.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 22:11:48.785 (43ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 22:11:48.785
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 22:11:48.785 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 22:11:48.785
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 22:11:48.785
STEP: Collecting events from namespace "reboot-5428". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 22:11:48.785
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 22:11:48.826
Jan 29 22:11:48.869: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 22:11:48.869: INFO: 
Jan 29 22:11:48.911: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 22:11:48.956: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master b2fbf9c6-a8ad-4945-a5e2-052805da66e2 1981 0 2023-01-29 22:00:49 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 22:01:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 22:01:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 22:11:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.220.45,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0fdb3cfe29f66637553465718381a2f8,SystemUUID:0fdb3cfe-29f6-6637-5534-65718381a2f8,BootID:6f3f19cb-1b2d-43f1-a98c-6f2c40560047,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:11:48.956: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 22:11:49.005: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 22:11:49.047: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available Jan 29 22:11:49.047: INFO: Logging node info for node bootstrap-e2e-minion-group-0h23 Jan 29 22:11:49.089: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0h23 4bc52c5d-d6ac-4b10-a791-0f46bb41bbe0 1611 0 2023-01-29 22:00:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0h23 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:06:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-0h23,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 
UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:00:54 +0000 UTC,LastTransitionTime:2023-01-29 22:00:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.69.167,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0h23.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0h23.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8b143884b0552b595cbcfc83ba2dba58,SystemUUID:8b143884-b055-2b59-5cbc-fc83ba2dba58,BootID:65064e71-361a-40cd-9ae4-21f18d6bad09,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:11:49.090: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0h23 Jan 29 22:11:49.134: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0h23 Jan 29 22:11:49.177: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-0h23: error trying to reach service: No agent available Jan 29 22:11:49.177: INFO: Logging node info for node bootstrap-e2e-minion-group-prl8 Jan 29 22:11:49.218: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-prl8 dc1f933b-530d-4900-80bb-fdebf917515a 1632 0 2023-01-29 22:00:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-prl8 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:06:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 22:06:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 22:07:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-prl8,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 
UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:07:00 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.11.253,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-prl8.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-prl8.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e4ee97ed1426b2932671b760c0a7fcdd,SystemUUID:e4ee97ed-1426-b293-2671-b760c0a7fcdd,BootID:52efc0e9-9d9e-407b-ac2d-5f66c05ac932,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:11:49.219: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-prl8 Jan 29 22:11:49.263: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-prl8 Jan 29 22:11:49.305: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-prl8: error trying to reach service: No agent available Jan 29 22:11:49.305: INFO: Logging node info for node bootstrap-e2e-minion-group-qp90 Jan 29 22:11:49.347: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-qp90 6a45fc18-dedd-4084-96e2-e6ff57e70a04 2004 0 2023-01-29 22:00:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-qp90 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-29 22:07:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:10:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 22:10:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 22:11:45 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-qp90,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2023-01-29 22:10:18 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:45 +0000 UTC,LastTransitionTime:2023-01-29 22:11:45 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:45 +0000 UTC,LastTransitionTime:2023-01-29 22:11:45 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:45 +0000 UTC,LastTransitionTime:2023-01-29 22:11:45 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:11:45 +0000 UTC,LastTransitionTime:2023-01-29 22:11:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.82.19.122,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-qp90.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-qp90.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f468cde0139c49621ce815c9f02c0393,SystemUUID:f468cde0-139c-4962-1ce8-15c9f02c0393,BootID:632f7d0e-dfe9-46a1-91f4-d61d6e33f868,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:11:49.347: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-qp90 Jan 29 22:11:49.394: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-qp90 Jan 29 22:11:49.436: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-qp90: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 22:11:49.436 (651ms) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 22:11:49.437 (651ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 22:11:49.437 STEP: Destroying namespace "reboot-5428" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 22:11:49.437 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 22:11:49.479 (42ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 22:11:49.479 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 22:11:49.479 (0s)
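For reference, the iptables sequence that the test pushes over SSH to each node appears in this log only in escaped form ("\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n..."). Below is a de-escaped rendering of that recorded command with descriptive comments added; it is the script as logged, not additional tooling:

nohup sh -c '
  set -x
  sleep 10
  # insert a loopback ACCEPT rule first so local traffic keeps working
  while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
  # then drop all other inbound packets for the duration of the test
  while true; do sudo iptables -I INPUT 2 -j DROP && break; done
  date
  sleep 120
  # two minutes later, remove the DROP rule and the loopback exception
  while true; do sudo iptables -D INPUT -j DROP && break; done
  while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-inbound.log 2>&1 &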
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:00.049: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:02.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:02.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:02.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:04.140: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:04.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:04.141: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:06.184: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:06.184: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:06.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:08.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:08.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:08.235: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:10.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:10.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:10.280: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:12.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:12.324: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:12.325: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:14.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:14.368: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:14.369: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:16.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:16.411: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:16.413: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:18.454: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:18.455: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:18.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:20.498: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:20.498: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:20.499: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:22.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:22.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:22.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:24.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:24.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:24.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 22:11:26.645: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:26.645: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:26.645: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:28.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:28.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:28.690: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:30.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:30.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:30.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:32.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:32.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:32.781: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:34.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:34.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:34.827: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:36.828: INFO: Node bootstrap-e2e-minion-group-prl8 didn't reach desired Ready condition status (false) within 2m0s Jan 29 22:11:36.828: INFO: Node bootstrap-e2e-minion-group-0h23 didn't reach desired Ready condition status (false) within 2m0s Jan 29 22:11:36.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:38.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:11:40.955: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:42.999: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:45.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 22:11:47.085: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd]
Jan 29 22:11:47.085: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-n78nd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:11:47.085: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qp90" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:11:47.127: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=true. Elapsed: 42.389576ms
Jan 29 22:11:47.127: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90" satisfied condition "running and ready, or succeeded"
Jan 29 22:11:47.128: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=true. Elapsed: 42.566181ms
Jan 29 22:11:47.128: INFO: Pod "metadata-proxy-v0.1-n78nd" satisfied condition "running and ready, or succeeded"
Jan 29 22:11:47.128: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd]
Jan 29 22:11:47.128: INFO: Reboot successful on node bootstrap-e2e-minion-group-qp90
Jan 29 22:11:47.128: INFO: Node bootstrap-e2e-minion-group-0h23 failed reboot test.
Jan 29 22:11:47.128: INFO: Node bootstrap-e2e-minion-group-prl8 failed reboot test.
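Each repeated entry in the poll above is one probe of a node's Ready condition: the harness first waits up to 2m0s for Ready to become false (the API server noticing the packet blackout) and then up to 5m0s for it to return to true. Only qp90 was ever marked NotReady; 0h23 and prl8 kept reporting Ready for the entire window, which is why they fail here. The same condition can be spot-checked by hand; a minimal sketch, assuming kubectl access to the cluster (the node name is taken from this run and is only illustrative):

    # Print one node's Ready condition status ("True", "False" or "Unknown")
    # every two seconds.
    while true; do
      kubectl get node bootstrap-e2e-minion-group-prl8 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
      sleep 2
    done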
Jan 29 22:11:47.128: INFO: Executing termination hook on nodes
Jan 29 22:11:47.128: INFO: Getting external IP address for bootstrap-e2e-minion-group-0h23
Jan 29 22:11:47.128: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-0h23(35.247.69.167:22)
Jan 29 22:11:47.647: INFO: ssh prow@35.247.69.167:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 22:11:47.647: INFO: ssh prow@35.247.69.167:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:09:45 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 22:11:47.647: INFO: ssh prow@35.247.69.167:22: stderr: ""
Jan 29 22:11:47.647: INFO: ssh prow@35.247.69.167:22: exit code: 0
Jan 29 22:11:47.647: INFO: Getting external IP address for bootstrap-e2e-minion-group-prl8
Jan 29 22:11:47.647: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-prl8(35.197.11.253:22)
Jan 29 22:11:48.172: INFO: ssh prow@35.197.11.253:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 22:11:48.172: INFO: ssh prow@35.197.11.253:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:09:45 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 22:11:48.172: INFO: ssh prow@35.197.11.253:22: stderr: ""
Jan 29 22:11:48.172: INFO: ssh prow@35.197.11.253:22: exit code: 0
Jan 29 22:11:48.172: INFO: Getting external IP address for bootstrap-e2e-minion-group-qp90
Jan 29 22:11:48.172: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-qp90(34.82.19.122:22)
Jan 29 22:11:48.693: INFO: ssh prow@34.82.19.122:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 22:11:48.693: INFO: ssh prow@34.82.19.122:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:09:45 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 22:11:48.693: INFO: ssh prow@34.82.19.122:22: stderr: ""
Jan 29 22:11:48.693: INFO: ssh prow@34.82.19.122:22: exit code: 0
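The stdout read back from /tmp/drop-inbound.log on each node is an `sh -x` trace of the packet-drop script the test plants there. Collapsing the `+ true`/`+ break` pairs (each iptables call appears to run inside a retry loop that succeeds on the first pass), the trace reconstructs to roughly the following; this is a reconstruction from the trace, not the test's source:

    sleep 10
    # Keep loopback traffic flowing so local daemons can still reach each other.
    sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT
    # Drop all other inbound packets, cutting the node off from the cluster.
    sudo iptables -I INPUT 2 -j DROP
    date          # the trace shows: Sun Jan 29 22:09:45 UTC 2023
    sleep 120     # hold the blackout for two minutes
    # Delete the rules to restore inbound traffic.
    sudo iptables -D INPUT -j DROP
    sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT

All three nodes ran the identical script and exited 0, so the blackout mechanism itself worked; the failure is that the Ready condition of 0h23 and prl8 never turned false within the 2m0s allowed, as the poll above shows.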
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 22:11:48.693
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 22:11:48.693 (2m13.587s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 22:11:48.693
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 22:11:48.693
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-67jtp to bootstrap-e2e-minion-group-0h23 Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.461832289s (2.461840828s including waiting) Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.2:8181/ready": dial tcp 10.64.0.2:8181: connect: connection refused Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {node-controller } NodeNotReady: Node is not ready Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
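The kube-system event dump that the AfterEach step collects from here on (and which continues below) can be re-collected from a live cluster with plain kubectl; a minimal sketch:

    # List kube-system events, oldest first, mirroring the dump in this log.
    kubectl get events -n kube-system --sort-by=.lastTimestamp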
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": dial tcp 10.64.0.22:8181: connect: connection refused Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-67jtp_kube-system(72ca1a62-bb47-4fdd-8565-8cdea1e5a00a) Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.28:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-q6pbg to bootstrap-e2e-minion-group-0h23 Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {node-controller } NodeNotReady: Node is not ready Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618) Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": dial tcp 10.64.0.26:8181: connect: connection refused Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618) Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.29:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-67jtp Jan 29 22:11:48.742: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-q6pbg Jan 29 22:11:48.742: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 22:11:48.742: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 
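A recurring item in the coredns events above is the readiness probe on :8181/ready failing with connection refused or timeouts while the pods restart. Once a pod is running again, the same endpoint can be checked by hand through a port-forward; a sketch, with the pod name taken from this run (and assuming an interactive shell for the %1 job reference):

    kubectl -n kube-system port-forward pod/coredns-6846b5b5f-67jtp 8181:8181 &
    sleep 2                                # give the forward a moment to start
    curl -s http://127.0.0.1:8181/ready    # prints OK once coredns is ready
    kill %1                                # stop the port-forward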
Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 22:11:48.742: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 22:11:48.742: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 22:11:48.742: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_fd4b became leader Jan 29 22:11:48.742: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_55acf became leader Jan 29 22:11:48.742: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ad28a became leader Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-68c9g to bootstrap-e2e-minion-group-prl8 Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 657.613501ms (657.634978ms including waiting) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet 
bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {node-controller } NodeNotReady: Node is not ready Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-c8fqq to bootstrap-e2e-minion-group-0h23 Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 956.296756ms (956.305606ms including waiting) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {node-controller } NodeNotReady: Node is not ready Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet 
bootstrap-e2e-minion-group-0h23} Killing: Stopping container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-c8fqq_kube-system(0836b571-aa7d-46e2-846d-c2ef4dcbfd76) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Liveness probe failed: Get "http://10.64.0.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Failed: Error: failed to get sandbox container task: no running task found: task b2c0d64625e18667eee1d0a95e38a58d19d52df858184ed33ed54f65ddc2f556 not found: not found Jan 29 22:11:48.742: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-c8fqq_kube-system(0836b571-aa7d-46e2-846d-c2ef4dcbfd76) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-srg78 to bootstrap-e2e-minion-group-qp90 Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 679.018448ms (679.041957ms including waiting) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {node-controller } NodeNotReady: Node is not ready Jan 
29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-srg78_kube-system(e0557a1e-0314-4bfe-8bff-7b1532b1bc85) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "http://10.64.3.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 22:11:48.742: INFO: event for konnectivity-agent-srg78: {node-controller } NodeNotReady: Node is not ready Jan 29 22:11:48.742: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-c8fqq Jan 29 22:11:48.742: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-srg78 Jan 29 22:11:48.742: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-68c9g Jan 29 22:11:48.742: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 22:11:48.742: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 22:11:48.742: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 22:11:48.742: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
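The konnectivity-agent entries above show the same pattern on all three nodes: liveness probes against :8093/healthz time out during the blackout, the kubelet kills the container, and restarts go into back-off. After the fact, restart counts and the probe settings can be pulled with plain kubectl; the pod name is from this run:

    # Restart counts and node placement for the agents named in the events.
    kubectl -n kube-system get pods -o wide | grep konnectivity-agent

    # Probe configuration and recent container state for one of them.
    kubectl -n kube-system describe pod konnectivity-agent-srg78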
Jan 29 22:11:48.742: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 22:11:48.742: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 22:11:48.742: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 22:11:48.742: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 22:11:48.742: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 22:11:48.742: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
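The kube-apiserver events above are its own readiness and liveness probes failing against https://127.0.0.1:443 while the control plane restarted. The same health endpoints, including the probe's exclude parameters, can be queried through kubectl's authenticated transport:

    kubectl get --raw '/readyz?verbose'
    kubectl get --raw '/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1'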
Jan 29 22:11:48.742: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused Jan 29 22:11:48.742: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_17b47e1a-c3ff-42ad-b566-12beffed0288 became leader Jan 29 22:11:48.742: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a96406e5-1a2d-415b-8674-47808fdfe3fe became leader Jan 29 22:11:48.742: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_12be7f8d-96f2-4959-9cf6-ed72d48a5404 became leader Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-8w5rj to bootstrap-e2e-minion-group-0h23 Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.575856713s (2.575872946s including waiting) Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container autoscaler Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8w5rj_kube-system(7b9fb270-f42e-4c3d-9947-2b7804b28b97) Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {node-controller } NodeNotReady: Node is not ready Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
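The FailedScheduling events for kube-dns-autoscaler above (and for coredns earlier) all trace back to the node.kubernetes.io/not-ready taint that the node controller applies while a node is NotReady. Which taints are present at any point can be listed directly; a minimal sketch:

    kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'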
Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container autoscaler Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8w5rj_kube-system(7b9fb270-f42e-4c3d-9947-2b7804b28b97) Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-8w5rj Jan 29 22:11:48.742: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container kube-proxy Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0h23_kube-system(a7d7c673a5678c3fd05bb8d81e613fd2) Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {node-controller } NodeNotReady: Node is not ready Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container kube-proxy Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {node-controller } NodeNotReady: Node is not ready Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-prl8_kube-system(af7f7d5ac5e113eedfb5c13ec70c059c) Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qp90_kube-system(fdc7414ccaf4c7060bb3a896ee9c4fdc)
Jan 29 22:11:48.742: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 22:11:48.742: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e5aa9ff1-292b-44e6-a72b-8735e76d222a became leader
Jan 29 22:11:48.742: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_68b1b904-ad42-431c-80bb-86195fbcd230 became leader
Jan 29 22:11:48.742: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_65313fb6-cd85-4780-9c60-766a799fefea became leader
Jan 29 22:11:48.742: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4b1c330c-d507-49e9-bb07-682f604268de became leader
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
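Four LeaderElection records in a row mean the scheduler restarted and re-acquired its leadership lease four times, which lines up with the kube-scheduler BackOff events just above. For reference, that lease handshake is the client-go leaderelection pattern; a minimal sketch with an illustrative lease name (the real scheduler wires up its own well-known lease internally):

    import (
        "context"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    // runWithLease blocks while holding a Lease-based lock; every new holder
    // shows up as a "became leader" event like the four above.
    func runWithLease(ctx context.Context, cs *kubernetes.Clientset, id string) {
        lock := &resourcelock.LeaseLock{
            // "example-lease" is illustrative, not the scheduler's actual lease name.
            LeaseMeta:  metav1.ObjectMeta{Name: "example-lease", Namespace: "kube-system"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }
        leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { /* work while leader */ },
                OnStoppedLeading: func() { /* lease lost, e.g. process restarted */ },
            },
        })
    }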
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-br722 to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.254994621s (1.255003973s including waiting)
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Liveness probe failed: Get "http://10.64.0.23:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 22:11:48.742: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-br722
Jan 29 22:11:48.742: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 22:11:48.742: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7h8xr to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 728.14263ms (728.154201ms including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.813378152s (1.81340007s including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
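The default-http-backend restart above is the kubelet's standard reaction to a failing liveness probe: the GET on http://10.64.0.23:8080/healthz timed out, so the container was killed and restarted. The addon's actual manifest is not part of this log; as a generic sketch, an HTTP liveness probe of that shape looks like this in Go types (field values illustrative):

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // Illustrative only; the real l7-default-backend manifest may use different values.
    var livenessProbe = corev1.Probe{
        ProbeHandler: corev1.ProbeHandler{
            HTTPGet: &corev1.HTTPGetAction{
                Path: "/healthz",
                Port: intstr.FromInt(8080),
            },
        },
        TimeoutSeconds:   5,  // "context deadline exceeded" above means a timeout like this fired
        PeriodSeconds:    10, // probe interval
        FailureThreshold: 3,  // kubelet kills and restarts the container after this many failures
    }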
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-gjgkr to bootstrap-e2e-minion-group-prl8
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 725.023258ms (725.04726ms including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.833322514s (1.833331253s including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-n78nd to bootstrap-e2e-minion-group-qp90
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 789.594528ms (789.609762ms including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.896285117s (1.896293813s including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
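The DNSConfigForming warnings repeated across all three minions are the kubelet trimming the node's resolv.conf to the classic three-nameserver resolver limit: with 1.1.1.1, 8.8.8.8 and 1.0.0.1 all configured, anything beyond the first three is dropped. A toy check of that condition (the limit constant is the commonly documented resolver cap, hard-coded here for illustration rather than taken from kubelet source):

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the usual glibc resolv.conf cap the kubelet warns about.
    const maxNameservers = 3

    func checkResolvConf(path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            return err
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded: %v, only the first %d are applied\n", servers, maxNameservers)
        }
        return nil
    }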
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-phrn6 to bootstrap-e2e-master
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 638.236648ms (638.252765ms including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.561997891s (1.56200326s including waiting)
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-7h8xr
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-phrn6
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-gjgkr
Jan 29 22:11:48.742: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-n78nd
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-858xc to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.198313689s (3.198321554s including waiting)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.812916392s (3.812924842s including waiting)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
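The FailedScheduling pair above is expected while the cluster recovers from a reboot: as long as the only registered node still carries the node.kubernetes.io/not-ready taint, only pods that tolerate it can schedule. For comparison, such a toleration looks like this in Go types (illustrative; metrics-server evidently does not carry one, which is why it waited):

    import corev1 "k8s.io/api/core/v1"

    // Pods carrying this toleration may schedule onto a node still marked not-ready.
    var notReadyToleration = corev1.Toleration{
        Key:      "node.kubernetes.io/not-ready",
        Operator: corev1.TolerationOpExists,
        Effect:   corev1.TaintEffectNoSchedule,
    }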
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-858xc
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-858xc
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-qmbs6 to bootstrap-e2e-minion-group-qp90
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.353709849s (1.353731831s including waiting)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.015217229s (1.01523164s including waiting)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": read tcp 10.64.3.1:36350->10.64.3.3:10250: read: connection reset by peer
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Container metrics-server failed liveness probe, will be restarted
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Failed: Error: failed to get sandbox container task: no running task found: task 4ac2767f3e99f3d72489c6f4ac8b5d5588d1b55aca1cdd3beefe33bfd1fb8c2e not found: not found
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
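With this much sandbox churn, per-container restart counts separate real crash loops (like the metrics-server BackOff below) from one-off kills faster than reading raw events. A small client-go sketch, reusing a *kubernetes.Clientset built the same way as in the earlier snippet:

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // reportRestarts prints every kube-system container that has restarted,
    // surfacing crash loops without wading through the event stream.
    func reportRestarts(ctx context.Context, cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            for _, st := range p.Status.ContainerStatuses {
                if st.RestartCount > 0 {
                    fmt.Printf("%s/%s restarted %d times (ready=%v)\n", p.Name, st.Name, st.RestartCount, st.Ready)
                }
            }
        }
        return nil
    }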
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.8:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.8:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-qmbs6_kube-system(44703c8b-4289-449f-8dce-96f50d686272)
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server-nanny
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server
Jan 29 22:11:48.742: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-qmbs6
Jan 29 22:11:48.743: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 22:11:48.743: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 22:11:48.743: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-0h23
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.290617862s (2.290627616s including waiting)
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(1b9daa28-15d1-49b3-a153-e62f36714b55)
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container volume-snapshot-controller
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(1b9daa28-15d1-49b3-a153-e62f36714b55)
Jan 29 22:11:48.743: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 22:11:48.743 (50ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 22:11:48.743
Jan 29 22:11:48.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 22:11:48.785 (43ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 22:11:48.785
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 22:11:48.785 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 22:11:48.785
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 22:11:48.785
STEP: Collecting events from namespace "reboot-5428". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 22:11:48.785
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 22:11:48.826
Jan 29 22:11:48.869: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 22:11:48.869: INFO:
Jan 29 22:11:48.911: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 22:11:48.956: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master b2fbf9c6-a8ad-4945-a5e2-052805da66e2 1981 0 2023-01-29 22:00:49 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 22:01:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 22:01:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 22:11:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.220.45,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0fdb3cfe29f66637553465718381a2f8,SystemUUID:0fdb3cfe-29f6-6637-5534-65718381a2f8,BootID:6f3f19cb-1b2d-43f1-a98c-6f2c40560047,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 22:11:48.956: INFO: Logging kubelet events for node bootstrap-e2e-master
Jan 29 22:11:49.005: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 29 22:11:49.047: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: No agent available
Jan 29 22:11:49.047: INFO: Logging node info for node bootstrap-e2e-minion-group-0h23
Jan 29 22:11:49.089: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0h23 4bc52c5d-d6ac-4b10-a791-0f46bb41bbe0 1611 0 2023-01-29 22:00:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0h23 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:06:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-0h23,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 
UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:00:54 +0000 UTC,LastTransitionTime:2023-01-29 22:00:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.69.167,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0h23.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0h23.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8b143884b0552b595cbcfc83ba2dba58,SystemUUID:8b143884-b055-2b59-5cbc-fc83ba2dba58,BootID:65064e71-361a-40cd-9ae4-21f18d6bad09,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 
registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 22:11:49.090: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0h23
Jan 29 22:11:49.134: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0h23
Jan 29 22:11:49.177: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-0h23: error trying to reach service: No agent available
Jan 29 22:11:49.177: INFO: Logging node info for node bootstrap-e2e-minion-group-prl8
Jan 29 22:11:49.218: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-prl8 dc1f933b-530d-4900-80bb-fdebf917515a 1632 0 2023-01-29 22:00:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-prl8 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:06:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 22:06:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 22:07:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-prl8,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 
UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:07:00 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.11.253,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-prl8.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-prl8.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e4ee97ed1426b2932671b760c0a7fcdd,SystemUUID:e4ee97ed-1426-b293-2671-b760c0a7fcdd,BootID:52efc0e9-9d9e-407b-ac2d-5f66c05ac932,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:11:49.219: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-prl8 Jan 29 22:11:49.263: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-prl8 Jan 29 22:11:49.305: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-prl8: error trying to reach service: No agent available Jan 29 22:11:49.305: INFO: Logging node info for node bootstrap-e2e-minion-group-qp90 Jan 29 22:11:49.347: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-qp90 6a45fc18-dedd-4084-96e2-e6ff57e70a04 2004 0 2023-01-29 22:00:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-qp90 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2023-01-29 22:07:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:10:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 22:10:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 22:11:45 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-qp90,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2023-01-29 22:10:18 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:45 +0000 UTC,LastTransitionTime:2023-01-29 22:11:45 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:45 +0000 UTC,LastTransitionTime:2023-01-29 22:11:45 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:45 +0000 UTC,LastTransitionTime:2023-01-29 22:11:45 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:11:45 +0000 UTC,LastTransitionTime:2023-01-29 22:11:45 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.82.19.122,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-qp90.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-qp90.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f468cde0139c49621ce815c9f02c0393,SystemUUID:f468cde0-139c-4962-1ce8-15c9f02c0393,BootID:632f7d0e-dfe9-46a1-91f4-d61d6e33f868,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:11:49.347: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-qp90 Jan 29 22:11:49.394: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-qp90 Jan 29 22:11:49.436: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-minion-group-qp90: error trying to reach service: No agent available END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 22:11:49.436 (651ms) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 22:11:49.437 (651ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 22:11:49.437 STEP: Destroying namespace "reboot-5428" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 22:11:49.437 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 22:11:49.479 (42ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 22:11:49.479 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 22:11:49.479 (0s)
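The inbound-packet case above injects its iptables script over SSH in the same pattern as the outbound case logged verbatim further down (see the reconstruction after the reboot-test summary). A minimal sketch of the inbound form, assuming the INPUT chain and a /tmp/drop-inbound.log path by analogy with the outbound variant; neither string is reproduced in this excerpt:

nohup sh -c '
    set -x
    sleep 10
    # retry each iptables call until it succeeds (e.g. under xtables lock contention)
    while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done   # keep loopback traffic alive
    while true; do sudo iptables -I INPUT 2 -j DROP && break; done                  # drop all other inbound packets
    date
    sleep 120
    while true; do sudo iptables -D INPUT -j DROP && break; done                    # restore inbound connectivity
    while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
' >/tmp/drop-inbound.log 2>&1 &

The leading sleep 10 gives the SSH session time to disconnect before the DROP rule lands, which is why the ssh commands report exit code 0 even though the node is about to go dark.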
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 22:09:29.813
from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 22:07:10.466 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 22:07:10.466 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 22:07:10.466 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 22:07:10.466 Jan 29 22:07:10.466: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 22:07:10.468 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 22:07:10.592 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 22:07:10.672 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 22:07:10.752 (286ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 22:07:10.752 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 22:07:10.752 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 22:07:10.752 Jan 29 22:07:10.847: INFO: Getting bootstrap-e2e-minion-group-prl8 Jan 29 22:07:10.847: INFO: Getting bootstrap-e2e-minion-group-0h23 Jan 29 22:07:10.847: INFO: Getting bootstrap-e2e-minion-group-qp90 Jan 29 22:07:10.921: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-prl8 condition Ready to be true Jan 29 22:07:10.921: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-qp90 condition Ready to be true Jan 29 22:07:10.921: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-0h23 condition Ready to be true Jan 29 22:07:10.965: INFO: Node bootstrap-e2e-minion-group-0h23 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23 metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0] Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23 metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0] Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.965: INFO: Node bootstrap-e2e-minion-group-prl8 has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-gjgkr kube-proxy-bootstrap-e2e-minion-group-prl8] Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-gjgkr kube-proxy-bootstrap-e2e-minion-group-prl8] Jan 29 22:07:10.965: INFO: Node bootstrap-e2e-minion-group-qp90 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-prl8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or 
succeeded: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-n78nd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.966: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-8w5rj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.966: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-gjgkr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.966: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-0h23" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.966: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-7h8xr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.966: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qp90" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:11.011: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.798402ms Jan 29 22:07:11.011: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Pod "kube-dns-autoscaler-5f6455f985-8w5rj": Phase="Running", Reason="", readiness=true. Elapsed: 47.357758ms Jan 29 22:07:11.013: INFO: Pod "kube-dns-autoscaler-5f6455f985-8w5rj" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=true. Elapsed: 47.575291ms Jan 29 22:07:11.013: INFO: Pod "metadata-proxy-v0.1-n78nd" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=true. Elapsed: 47.474758ms Jan 29 22:07:11.013: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:07:11.013: INFO: Getting external IP address for bootstrap-e2e-minion-group-qp90 Jan 29 22:07:11.013: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-qp90(34.82.19.122:22) Jan 29 22:07:11.013: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8": Phase="Running", Reason="", readiness=true. Elapsed: 47.960757ms Jan 29 22:07:11.013: INFO: Pod "metadata-proxy-v0.1-7h8xr": Phase="Running", Reason="", readiness=true. Elapsed: 47.79725ms Jan 29 22:07:11.013: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Pod "metadata-proxy-v0.1-7h8xr" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Pod "metadata-proxy-v0.1-gjgkr": Phase="Running", Reason="", readiness=true. 
Elapsed: 47.91187ms Jan 29 22:07:11.014: INFO: Pod "metadata-proxy-v0.1-gjgkr" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.014: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-gjgkr kube-proxy-bootstrap-e2e-minion-group-prl8] Jan 29 22:07:11.014: INFO: Getting external IP address for bootstrap-e2e-minion-group-prl8 Jan 29 22:07:11.014: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-prl8(35.197.11.253:22) Jan 29 22:07:11.014: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0h23": Phase="Running", Reason="", readiness=true. Elapsed: 47.907218ms Jan 29 22:07:11.014: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0h23" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.014: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23 metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0] Jan 29 22:07:11.014: INFO: Getting external IP address for bootstrap-e2e-minion-group-0h23 Jan 29 22:07:11.014: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-0h23(35.247.69.167:22) Jan 29 22:07:11.570: INFO: ssh prow@34.82.19.122:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 22:07:11.570: INFO: ssh prow@34.82.19.122:22: stdout: "" Jan 29 22:07:11.570: INFO: ssh prow@34.82.19.122:22: stderr: "" Jan 29 22:07:11.570: INFO: ssh prow@34.82.19.122:22: exit code: 0 Jan 29 22:07:11.570: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-qp90 condition Ready to be false Jan 29 22:07:11.585: INFO: ssh prow@35.197.11.253:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 22:07:11.585: INFO: ssh prow@35.197.11.253:22: stdout: "" Jan 29 22:07:11.585: INFO: ssh prow@35.197.11.253:22: stderr: "" Jan 29 22:07:11.585: INFO: ssh prow@35.197.11.253:22: exit code: 0 Jan 29 22:07:11.585: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-prl8 condition Ready to be false Jan 29 22:07:11.587: INFO: ssh 
prow@35.247.69.167:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 22:07:11.587: INFO: ssh prow@35.247.69.167:22: stdout: "" Jan 29 22:07:11.587: INFO: ssh prow@35.247.69.167:22: stderr: "" Jan 29 22:07:11.587: INFO: ssh prow@35.247.69.167:22: exit code: 0 Jan 29 22:07:11.587: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-0h23 condition Ready to be false Jan 29 22:07:11.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:11.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:11.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:13.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:13.692: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:13.692: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:15.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:15.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:15.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:17.757: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:17.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:17.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:19.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:19.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:19.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:07:21.854: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:21.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:21.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:23.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:23.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:23.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:25.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:25.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:25.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:27.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:28.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:28.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:30.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:30.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:30.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:32.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:32.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:32.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:07:34.116: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:34.127: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:34.127: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:36.156: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:36.167: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:36.167: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:38.197: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:38.207: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:38.207: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:40.237: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:40.248: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:40.248: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:42.278: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:42.288: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:42.288: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:44.319: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:44.328: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:44.328: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:46.359: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:46.368: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:46.368: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:48.400: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:48.408: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:48.408: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:50.440: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:50.448: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:50.448: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:52.481: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:52.488: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:52.488: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:54.521: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:54.528: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:54.528: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:56.561: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:56.568: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:56.568: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:58.600: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:58.608: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:58.608: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:00.641: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:00.648: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:00.648: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:02.682: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:02.688: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:02.688: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:04.721: INFO: Couldn't get node 
bootstrap-e2e-minion-group-qp90 Jan 29 22:08:04.727: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:04.728: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 W0129 22:08:13.151926 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:13.151985 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:13.242: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:13.242: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:13.242: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 W0129 22:08:13.253228 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:13.253305 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused W0129 22:08:13.603049 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:13.603098 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused W0129 22:08:15.000628 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:15.000687 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:15.282: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:15.282: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:15.282: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 W0129 22:08:15.811458 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:15.811507 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: 
Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused W0129 22:08:16.646816 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:16.646921 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:17.322: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:17.322: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:17.322: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 W0129 22:08:18.495865 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:18.495914 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:19.362: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:19.362: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:19.362: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 W0129 22:08:20.052290 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:20.052352 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:21.402: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:21.402: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:21.402: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 W0129 22:08:21.838532 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:21.838591 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:23.443: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:23.443: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:23.443: 
INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:25.483: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:25.483: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:25.483: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:27.523: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:27.523: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:27.523: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:29.564: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:29.564: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:29.564: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 W0129 22:08:30.232492 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:30.232542 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused W0129 22:08:30.613755 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:30.613886 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:35.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:35.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:35.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:37.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:37.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:37.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:39.976: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:39.976: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled
Jan 29 22:08:39.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:44.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:44.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:44.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:46.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:46.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:46.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:48.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:48.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:48.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:50.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:50.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:50.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:52.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:52.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:52.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:54.305: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:54.305: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:54.305: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:56.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:56.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:56.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:58.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:58.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:08:58.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:00.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:00.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:00.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:02.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:02.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:02.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:04.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:04.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:04.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:06.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:06.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:06.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:08.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:08.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:08.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:10.835: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:10.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:10.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 22:09:12.835: INFO: Node bootstrap-e2e-minion-group-qp90 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 22:09:12.849: INFO: Node bootstrap-e2e-minion-group-0h23 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 22:09:12.849: INFO: Node bootstrap-e2e-minion-group-prl8 didn't reach desired Ready condition status (false) within 2m0s
Jan 29 22:09:12.849: INFO: Node bootstrap-e2e-minion-group-0h23 failed reboot test.
Jan 29 22:09:12.849: INFO: Node bootstrap-e2e-minion-group-prl8 failed reboot test.
Jan 29 22:09:12.849: INFO: Node bootstrap-e2e-minion-group-qp90 failed reboot test.
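All three nodes were expected to leave the Ready state (condition status false) while packets were being dropped, and the poll above gave up after 2m0s with every kubelet still posting Ready. For re-checking a single node by hand, the framework's per-node wait can be approximated with a kubectl polling loop like the sketch below (the node name, expected status, and timeout are illustrative values taken from this log, not the framework's Go code):

    # Poll one node's Ready condition until it reaches the expected status.
    # Sketch only: NODE, WANT, and TIMEOUT mirror the log, not reboot.go.
    NODE=bootstrap-e2e-minion-group-qp90
    WANT=False        # the reboot test first waits for Ready=false
    TIMEOUT=120       # the 2m0s wait that expired above
    deadline=$(( $(date +%s) + TIMEOUT ))
    while [ "$(date +%s)" -lt "$deadline" ]; do
      status=$(kubectl get node "$NODE" \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
      echo "Condition Ready of node $NODE is $status"
      [ "$status" = "$WANT" ] && exit 0
      sleep 2
    done
    echo "Node $NODE didn't reach Ready=$WANT within ${TIMEOUT}s" >&2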
Jan 29 22:09:12.849: INFO: Executing termination hook on nodes
Jan 29 22:09:12.849: INFO: Getting external IP address for bootstrap-e2e-minion-group-0h23
Jan 29 22:09:12.849: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-0h23(35.247.69.167:22)
Jan 29 22:09:28.764: INFO: ssh prow@35.247.69.167:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 22:09:28.764: INFO: ssh prow@35.247.69.167:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:07:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 22:09:28.764: INFO: ssh prow@35.247.69.167:22: stderr: ""
Jan 29 22:09:28.764: INFO: ssh prow@35.247.69.167:22: exit code: 0
Jan 29 22:09:28.764: INFO: Getting external IP address for bootstrap-e2e-minion-group-prl8
Jan 29 22:09:28.764: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-prl8(35.197.11.253:22)
Jan 29 22:09:29.292: INFO: ssh prow@35.197.11.253:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 22:09:29.292: INFO: ssh prow@35.197.11.253:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:07:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 22:09:29.292: INFO: ssh prow@35.197.11.253:22: stderr: ""
Jan 29 22:09:29.292: INFO: ssh prow@35.197.11.253:22: exit code: 0
Jan 29 22:09:29.292: INFO: Getting external IP address for bootstrap-e2e-minion-group-qp90
Jan 29 22:09:29.292: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-qp90(34.82.19.122:22)
Jan 29 22:09:29.813: INFO: ssh prow@34.82.19.122:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log
Jan 29 22:09:29.813: INFO: ssh prow@34.82.19.122:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:07:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 22:09:29.813: INFO: ssh prow@34.82.19.122:22: stderr: ""
Jan 29 22:09:29.813: INFO: ssh prow@34.82.19.122:22: exit code: 0
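Each stdout above is a shell xtrace of the hook script whose log the termination hook retrieves from /tmp/drop-outbound.log. Reconstructed from that trace, the hook amounts to the sketch below; the while/break retry loops are an assumption inferred from the "+ true ... + break" pairs in the trace, while the iptables commands, sleeps, and log path are verbatim:

    # Reconstruction of the drop-outbound hook, inferred from the xtrace above.
    (
      set -x                        # assumption: xtrace produces the traced log
      sleep 10                      # let the SSH session that planted us detach
      while true; do                # keep loopback traffic working
        sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break
      done
      while true; do                # drop every other outbound packet
        sudo iptables -I OUTPUT 2 -j DROP && break
      done
      date                          # traced as: Sun Jan 29 22:07:21 UTC 2023
      sleep 120                     # the outage window
      while true; do                # lift the drop rule
        sudo iptables -D OUTPUT -j DROP && break
      done
      while true; do                # remove the loopback exemption
        sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break
      done
    ) >/tmp/drop-outbound.log 2>&1 &

Going by the traced date, the DROP rule went in at 22:07:21 UTC and the 120 s window would have lifted it around 22:09:21, after the 2m0s Ready-flip wait had already expired at 22:09:12.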
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 22:09:29.813
< Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 22:09:29.813 (2m19.061s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 22:09:29.813
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 22:09:29.813
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-67jtp to bootstrap-e2e-minion-group-0h23
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.461832289s (2.461840828s including waiting)
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.2:8181/ready": dial tcp 10.64.0.2:8181: connect: connection refused
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
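The dump that follows is the AfterEach walking every event in the kube-system namespace through the API client. The same view can be pulled by hand from a live cluster with a kubectl query along these lines (an equivalent, not the command the framework runs):

    # List kube-system events, oldest first, mirroring this dump.
    kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp
    # Or scope to one crash-looping pod seen in these events:
    kubectl -n kube-system describe pod coredns-6846b5b5f-67jtp | sed -n '/^Events:/,$p'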
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": dial tcp 10.64.0.22:8181: connect: connection refused
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-67jtp_kube-system(72ca1a62-bb47-4fdd-8565-8cdea1e5a00a)
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-q6pbg to bootstrap-e2e-minion-group-0h23
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618)
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": dial tcp 10.64.0.26:8181: connect: connection refused
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618)
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.29:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-67jtp
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-q6pbg
Jan 29 22:09:29.864: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 22:09:29.864: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 22:09:29.864: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 22:09:29.864: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 22:09:29.864: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 22:09:29.864: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb)
Jan 29 22:09:29.864: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_fd4b became leader
Jan 29 22:09:29.864: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_55acf became leader
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-68c9g to bootstrap-e2e-minion-group-prl8
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 657.613501ms (657.634978ms including waiting)
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f)
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f)
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-c8fqq to bootstrap-e2e-minion-group-0h23
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 956.296756ms (956.305606ms including waiting)
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-c8fqq_kube-system(0836b571-aa7d-46e2-846d-c2ef4dcbfd76)
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Liveness probe failed: Get "http://10.64.0.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-srg78 to bootstrap-e2e-minion-group-qp90
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 679.018448ms (679.041957ms including waiting)
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-srg78_kube-system(e0557a1e-0314-4bfe-8bff-7b1532b1bc85)
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "http://10.64.3.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:09:29.864: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-c8fqq
Jan 29 22:09:29.864: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-srg78
Jan 29 22:09:29.864: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-68c9g
Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
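A recurring pattern in these events is kubelet probes failing against pod IPs with connection refused or a client timeout, e.g. the konnectivity-agent liveness checks on port 8093 above. The probe can be approximated by hand with a plain HTTP GET from anywhere with pod-network reachability (a sketch: the address and path are copied from the events, and the 1 s limit is a stand-in for the probe's configured timeout, which this log does not show):

    # Approximate the kubelet's liveness probe against konnectivity-agent.
    curl -sS -m 1 -o /dev/null -w 'HTTP %{http_code} in %{time_total}s\n' \
      http://10.64.2.5:8093/healthz \
      || echo 'probe-style GET failed (timeout or connection refused)'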
Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 22:09:29.864: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 22:09:29.864: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 22:09:29.864: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_17b47e1a-c3ff-42ad-b566-12beffed0288 became leader
Jan 29 22:09:29.864: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a96406e5-1a2d-415b-8674-47808fdfe3fe became leader
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-8w5rj to bootstrap-e2e-minion-group-0h23
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.575856713s (2.575872946s including waiting)
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container autoscaler
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8w5rj_kube-system(7b9fb270-f42e-4c3d-9947-2b7804b28b97)
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-8w5rj
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0h23_kube-system(a7d7c673a5678c3fd05bb8d81e613fd2)
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-prl8_kube-system(af7f7d5ac5e113eedfb5c13ec70c059c)
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qp90_kube-system(fdc7414ccaf4c7060bb3a896ee9c4fdc)
Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 22:09:29.864: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e5aa9ff1-292b-44e6-a72b-8735e76d222a became leader
Jan 29 22:09:29.864: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_68b1b904-ad42-431c-80bb-86195fbcd230 became leader
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-br722 to bootstrap-e2e-minion-group-0h23
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.254994621s (1.255003973s including waiting)
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-br722
Jan 29 22:09:29.864: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7h8xr to bootstrap-e2e-minion-group-0h23
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 728.14263ms (728.154201ms including waiting)
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.813378152s (1.81340007s including waiting)
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-gjgkr to bootstrap-e2e-minion-group-prl8
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 725.023258ms (725.04726ms including waiting)
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.833322514s (1.833331253s including waiting)
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-n78nd to bootstrap-e2e-minion-group-qp90
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 789.594528ms (789.609762ms including waiting)
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.896285117s (1.896293813s including waiting)
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-phrn6 to bootstrap-e2e-master
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 638.236648ms (638.252765ms including waiting)
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.561997891s (1.56200326s including waiting)
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-7h8xr
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-phrn6
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-gjgkr
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-n78nd
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-858xc to bootstrap-e2e-minion-group-0h23
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.198313689s (3.198321554s including waiting)
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metrics-server
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metrics-server
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.812916392s (3.812924842s including waiting)
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metrics-server-nanny
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metrics-server-nanny
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container metrics-server
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container metrics-server-nanny
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-858xc
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-858xc
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-qmbs6 to bootstrap-e2e-minion-group-qp90
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.353709849s (1.353731831s including waiting)
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.015217229s (1.01523164s including waiting)
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server-nanny
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server-nanny
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": net/http: request canceled
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server-nanny Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": read tcp 10.64.3.1:36350->10.64.3.3:10250: read: connection reset by peer Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Container metrics-server failed liveness probe, will be restarted Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Failed: Error: failed to get sandbox container task: no running task found: task 4ac2767f3e99f3d72489c6f4ac8b5d5588d1b55aca1cdd3beefe33bfd1fb8c2e not found: not found Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server-nanny Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server-nanny Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.8:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.8:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-qmbs6_kube-system(44703c8b-4289-449f-8dce-96f50d686272) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-qmbs6 Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-0h23 Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.290617862s (2.290627616s including waiting) Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(1b9daa28-15d1-49b3-a153-e62f36714b55) Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(1b9daa28-15d1-49b3-a153-e62f36714b55) Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 22:09:29.864 (51ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 22:09:29.864 Jan 29 22:09:29.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 22:09:29.91 (46ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 22:09:29.91 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 22:09:29.91 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 22:09:29.91 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 22:09:29.91 STEP: Collecting events from namespace "reboot-4096". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 22:09:29.91 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 22:09:29.972 Jan 29 22:09:30.014: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 22:09:30.014: INFO: Jan 29 22:09:30.059: INFO: Logging node info for node bootstrap-e2e-master Jan 29 22:09:30.102: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master b2fbf9c6-a8ad-4945-a5e2-052805da66e2 1475 0 2023-01-29 22:00:49 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 22:01:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 22:01:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 22:06:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:15 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:15 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:15 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:06:15 +0000 UTC,LastTransitionTime:2023-01-29 22:00:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.220.45,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0fdb3cfe29f66637553465718381a2f8,SystemUUID:0fdb3cfe-29f6-6637-5534-65718381a2f8,BootID:6f3f19cb-1b2d-43f1-a98c-6f2c40560047,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:09:30.102: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 22:09:30.152: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 22:09:30.328: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container kube-apiserver ready: true, restart count 2 Jan 29 22:09:30.328: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 22:00:22 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container l7-lb-controller ready: false, restart count 4 Jan 29 22:09:30.328: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container etcd-container ready: true, restart count 1 Jan 29 22:09:30.328: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container konnectivity-server-container ready: true, restart count 0 Jan 29 22:09:30.328: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container kube-scheduler ready: false, restart count 2 Jan 29 22:09:30.328: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 22:00:22 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container kube-addon-manager ready: true, restart count 2 Jan 29 22:09:30.328: INFO: metadata-proxy-v0.1-phrn6 started at 2023-01-29 22:00:57 +0000 UTC (0+2 container statuses recorded) Jan 29 22:09:30.328: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 22:09:30.328: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 22:09:30.328: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container etcd-container ready: true, restart count 2 Jan 29 22:09:30.328: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 29 22:09:30.498: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 22:09:30.498: INFO: Logging node info for node bootstrap-e2e-minion-group-0h23 Jan 29 22:09:30.540: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0h23 4bc52c5d-d6ac-4b10-a791-0f46bb41bbe0 1611 0 2023-01-29 22:00:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0h23 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:06:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-0h23,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:00:54 +0000 UTC,LastTransitionTime:2023-01-29 22:00:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.69.167,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0h23.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0h23.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8b143884b0552b595cbcfc83ba2dba58,SystemUUID:8b143884-b055-2b59-5cbc-fc83ba2dba58,BootID:65064e71-361a-40cd-9ae4-21f18d6bad09,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:09:30.540: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0h23 Jan 29 22:09:30.585: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0h23 Jan 29 22:09:30.652: INFO: coredns-6846b5b5f-q6pbg started at 2023-01-29 22:01:02 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container coredns ready: true, restart count 4 Jan 29 
22:09:30.652: INFO: kube-proxy-bootstrap-e2e-minion-group-0h23 started at 2023-01-29 22:00:45 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 22:09:30.652: INFO: l7-default-backend-8549d69d99-br722 started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container default-http-backend ready: true, restart count 2 Jan 29 22:09:30.652: INFO: coredns-6846b5b5f-67jtp started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container coredns ready: true, restart count 3 Jan 29 22:09:30.652: INFO: kube-dns-autoscaler-5f6455f985-8w5rj started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container autoscaler ready: true, restart count 5 Jan 29 22:09:30.652: INFO: volume-snapshot-controller-0 started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container volume-snapshot-controller ready: true, restart count 7 Jan 29 22:09:30.652: INFO: metadata-proxy-v0.1-7h8xr started at 2023-01-29 22:00:46 +0000 UTC (0+2 container statuses recorded) Jan 29 22:09:30.652: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 22:09:30.652: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 22:09:30.652: INFO: konnectivity-agent-c8fqq started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container konnectivity-agent ready: false, restart count 4 Jan 29 22:09:30.817: INFO: Latency metrics for node bootstrap-e2e-minion-group-0h23 Jan 29 22:09:30.817: INFO: Logging node info for node bootstrap-e2e-minion-group-prl8 Jan 29 22:09:30.859: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-prl8 dc1f933b-530d-4900-80bb-fdebf917515a 1632 0 2023-01-29 22:00:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-prl8 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:06:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 22:06:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 22:07:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-prl8,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 
UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:07:00 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.11.253,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-prl8.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-prl8.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e4ee97ed1426b2932671b760c0a7fcdd,SystemUUID:e4ee97ed-1426-b293-2671-b760c0a7fcdd,BootID:52efc0e9-9d9e-407b-ac2d-5f66c05ac932,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:09:30.859: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-prl8 Jan 29 22:09:30.908: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-prl8 Jan 29 22:09:30.969: INFO: konnectivity-agent-68c9g started at 2023-01-29 22:01:07 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.969: INFO: Container konnectivity-agent ready: false, restart count 4 Jan 29 22:09:30.969: INFO: kube-proxy-bootstrap-e2e-minion-group-prl8 started at 2023-01-29 22:00:50 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.969: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 22:09:30.969: INFO: metadata-proxy-v0.1-gjgkr started at 2023-01-29 22:00:51 +0000 UTC (0+2 container statuses recorded) Jan 29 22:09:30.969: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 22:09:30.969: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 22:09:31.131: INFO: Latency metrics for node bootstrap-e2e-minion-group-prl8 Jan 29 22:09:31.131: INFO: Logging node info for node bootstrap-e2e-minion-group-qp90 Jan 29 22:09:31.173: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-qp90 6a45fc18-dedd-4084-96e2-e6ff57e70a04 1674 0 2023-01-29 22:00:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-qp90 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:06:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 22:07:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:07:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 22:07:05 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-qp90,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:05 +0000 UTC,LastTransitionTime:2023-01-29 22:07:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:05 +0000 UTC,LastTransitionTime:2023-01-29 22:07:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:05 +0000 UTC,LastTransitionTime:2023-01-29 22:07:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:07:05 +0000 UTC,LastTransitionTime:2023-01-29 22:07:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.82.19.122,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-qp90.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-qp90.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f468cde0139c49621ce815c9f02c0393,SystemUUID:f468cde0-139c-4962-1ce8-15c9f02c0393,BootID:632f7d0e-dfe9-46a1-91f4-d61d6e33f868,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:09:31.173: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-qp90 Jan 29 22:09:31.217: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-qp90 Jan 29 22:09:31.290: INFO: kube-proxy-bootstrap-e2e-minion-group-qp90 started at 2023-01-29 22:00:52 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:31.290: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 22:09:31.290: INFO: metadata-proxy-v0.1-n78nd started at 2023-01-29 22:00:52 +0000 UTC (0+2 container statuses recorded) Jan 29 22:09:31.290: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 22:09:31.290: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 22:09:31.290: INFO: konnectivity-agent-srg78 started at 2023-01-29 22:01:07 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:31.290: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 29 22:09:31.290: INFO: metrics-server-v0.5.2-867b8754b9-qmbs6 started at 2023-01-29 22:01:18 +0000 UTC (0+2 container statuses recorded) Jan 29 22:09:31.290: INFO: Container metrics-server ready: false, restart count 6 Jan 29 22:09:31.290: INFO: Container metrics-server-nanny ready: false, restart count 5 Jan 29 22:09:34.694: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-qp90 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 22:09:34.694 (4.783s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 22:09:34.694 (4.784s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 22:09:34.694 STEP: Destroying namespace "reboot-4096" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 22:09:34.694 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 22:09:34.736 (43ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 22:09:34.736 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 22:09:34.736 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\soutbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
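The escaped --ginkgo.focus regex above decodes to the plain spec name. A minimal sketch of rerunning only this spec, assuming a kubernetes/kubernetes checkout and a live test cluster on $KUBECONFIG (a distinguishing substring of the spec text is enough for ginkgo's focus matching):

    go run hack/e2e.go -v --test \
      --test_args='--ginkgo.focus=dropping all outbound packets for a while and ensure they function afterwards'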
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 22:09:29.813 (from junit_01.xml)
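For readability: the packet-drop script the test SSHes to each node (logged below with escaped \n and \t sequences) decodes to roughly the following shell; the 120-second drop window and the /tmp/drop-outbound.log path are taken verbatim from the log:

    nohup sh -c '
        set -x
        sleep 10
        # Keep loopback traffic flowing before dropping everything else.
        while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
        # Drop all other outbound packets for the test window.
        while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done
        date
        sleep 120
        # Restore: delete the DROP rule, then the loopback ACCEPT rule.
        while true; do sudo iptables -D OUTPUT -j DROP && break; done
        while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done
    ' >/tmp/drop-outbound.log 2>&1 &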
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 22:07:10.466 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 22:07:10.466 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 22:07:10.466 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 22:07:10.466 Jan 29 22:07:10.466: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 22:07:10.468 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 22:07:10.592 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 22:07:10.672 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 22:07:10.752 (286ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 22:07:10.752 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 22:07:10.752 (0s) > Enter [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 22:07:10.752 Jan 29 22:07:10.847: INFO: Getting bootstrap-e2e-minion-group-prl8 Jan 29 22:07:10.847: INFO: Getting bootstrap-e2e-minion-group-0h23 Jan 29 22:07:10.847: INFO: Getting bootstrap-e2e-minion-group-qp90 Jan 29 22:07:10.921: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-prl8 condition Ready to be true Jan 29 22:07:10.921: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-qp90 condition Ready to be true Jan 29 22:07:10.921: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-0h23 condition Ready to be true Jan 29 22:07:10.965: INFO: Node bootstrap-e2e-minion-group-0h23 has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23 metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0] Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23 metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0] Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.965: INFO: Node bootstrap-e2e-minion-group-prl8 has 2 assigned pods with no liveness probes: [metadata-proxy-v0.1-gjgkr kube-proxy-bootstrap-e2e-minion-group-prl8] Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-gjgkr kube-proxy-bootstrap-e2e-minion-group-prl8] Jan 29 22:07:10.965: INFO: Node bootstrap-e2e-minion-group-qp90 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-prl8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or 
succeeded: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:07:10.965: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-n78nd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.966: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-8w5rj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.966: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-gjgkr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.966: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-0h23" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.966: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-7h8xr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:10.966: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qp90" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:07:11.011: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 45.798402ms Jan 29 22:07:11.011: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Pod "kube-dns-autoscaler-5f6455f985-8w5rj": Phase="Running", Reason="", readiness=true. Elapsed: 47.357758ms Jan 29 22:07:11.013: INFO: Pod "kube-dns-autoscaler-5f6455f985-8w5rj" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=true. Elapsed: 47.575291ms Jan 29 22:07:11.013: INFO: Pod "metadata-proxy-v0.1-n78nd" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=true. Elapsed: 47.474758ms Jan 29 22:07:11.013: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:07:11.013: INFO: Getting external IP address for bootstrap-e2e-minion-group-qp90 Jan 29 22:07:11.013: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-qp90(34.82.19.122:22) Jan 29 22:07:11.013: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8": Phase="Running", Reason="", readiness=true. Elapsed: 47.960757ms Jan 29 22:07:11.013: INFO: Pod "metadata-proxy-v0.1-7h8xr": Phase="Running", Reason="", readiness=true. Elapsed: 47.79725ms Jan 29 22:07:11.013: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Pod "metadata-proxy-v0.1-7h8xr" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.013: INFO: Pod "metadata-proxy-v0.1-gjgkr": Phase="Running", Reason="", readiness=true. 
Elapsed: 47.91187ms Jan 29 22:07:11.014: INFO: Pod "metadata-proxy-v0.1-gjgkr" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.014: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-gjgkr kube-proxy-bootstrap-e2e-minion-group-prl8] Jan 29 22:07:11.014: INFO: Getting external IP address for bootstrap-e2e-minion-group-prl8 Jan 29 22:07:11.014: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-prl8(35.197.11.253:22) Jan 29 22:07:11.014: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0h23": Phase="Running", Reason="", readiness=true. Elapsed: 47.907218ms Jan 29 22:07:11.014: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0h23" satisfied condition "running and ready, or succeeded" Jan 29 22:07:11.014: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23 metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0] Jan 29 22:07:11.014: INFO: Getting external IP address for bootstrap-e2e-minion-group-0h23 Jan 29 22:07:11.014: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I OUTPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D OUTPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-outbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-0h23(35.247.69.167:22) Jan 29 22:07:11.570: INFO: ssh prow@34.82.19.122:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 22:07:11.570: INFO: ssh prow@34.82.19.122:22: stdout: "" Jan 29 22:07:11.570: INFO: ssh prow@34.82.19.122:22: stderr: "" Jan 29 22:07:11.570: INFO: ssh prow@34.82.19.122:22: exit code: 0 Jan 29 22:07:11.570: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-qp90 condition Ready to be false Jan 29 22:07:11.585: INFO: ssh prow@35.197.11.253:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 22:07:11.585: INFO: ssh prow@35.197.11.253:22: stdout: "" Jan 29 22:07:11.585: INFO: ssh prow@35.197.11.253:22: stderr: "" Jan 29 22:07:11.585: INFO: ssh prow@35.197.11.253:22: exit code: 0 Jan 29 22:07:11.585: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-prl8 condition Ready to be false Jan 29 22:07:11.587: INFO: ssh 
prow@35.247.69.167:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I OUTPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D OUTPUT -j DROP && break; done while true; do sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-outbound.log 2>&1 & Jan 29 22:07:11.587: INFO: ssh prow@35.247.69.167:22: stdout: "" Jan 29 22:07:11.587: INFO: ssh prow@35.247.69.167:22: stderr: "" Jan 29 22:07:11.587: INFO: ssh prow@35.247.69.167:22: exit code: 0 Jan 29 22:07:11.587: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-0h23 condition Ready to be false Jan 29 22:07:11.629: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:11.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:11.649: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:13.671: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:13.692: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:13.692: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:15.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:15.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:15.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:17.757: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:17.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:17.782: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:19.801: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:19.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:19.825: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:07:21.854: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:21.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:21.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:23.898: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:23.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:23.913: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:25.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:25.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:25.957: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:27.988: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:28.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:28.000: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:30.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:30.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:30.044: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:32.076: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:32.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:07:32.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:07:34.116: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:34.127: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:34.127: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:36.156: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:36.167: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:36.167: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:38.197: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:38.207: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:38.207: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:40.237: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:40.248: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:40.248: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:42.278: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:42.288: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:42.288: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:44.319: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:44.328: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:44.328: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:46.359: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:46.368: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:46.368: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:48.400: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:48.408: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:48.408: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:50.440: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:50.448: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:50.448: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:52.481: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:52.488: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:52.488: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:54.521: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:54.528: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:54.528: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:56.561: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:56.568: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:56.568: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:07:58.600: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:07:58.608: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:07:58.608: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:00.641: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:00.648: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:00.648: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:02.682: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:02.688: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:02.688: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:04.721: INFO: Couldn't get node 
bootstrap-e2e-minion-group-qp90 Jan 29 22:08:04.727: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:04.728: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 W0129 22:08:13.151926 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:13.151985 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:13.242: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:13.242: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:13.242: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 W0129 22:08:13.253228 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:13.253305 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused W0129 22:08:13.603049 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:13.603098 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused W0129 22:08:15.000628 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:15.000687 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:15.282: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:15.282: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:15.282: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 W0129 22:08:15.811458 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:15.811507 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: 
Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused W0129 22:08:16.646816 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:16.646921 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:17.322: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:17.322: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:17.322: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 W0129 22:08:18.495865 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:18.495914 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:19.362: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:19.362: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:19.362: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 W0129 22:08:20.052290 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:20.052352 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:21.402: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:21.402: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:21.402: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 W0129 22:08:21.838532 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:21.838591 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-prl8&resourceVersion=1712": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:23.443: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:23.443: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:23.443: 
INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:25.483: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:25.483: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:25.483: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:27.523: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:27.523: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:27.523: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:08:29.564: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:08:29.564: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:08:29.564: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 W0129 22:08:30.232492 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:30.232542 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-qp90&resourceVersion=1750": dial tcp 34.82.220.45:443: connect: connection refused W0129 22:08:30.613755 8097 reflector.go:483] test/utils/pod_store.go:57: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused E0129 22:08:30.613886 8097 reflector.go:141] test/utils/pod_store.go:57: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://34.82.220.45/api/v1/namespaces/kube-system/pods?fieldSelector=spec.nodeName%3Dbootstrap-e2e-minion-group-0h23&resourceVersion=1721": dial tcp 34.82.220.45:443: connect: connection refused Jan 29 22:08:35.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:35.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:35.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:37.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:37.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:37.922: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:39.976: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:39.976: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:08:39.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:42.024: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:44.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:44.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:44.073: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:46.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:46.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:46.119: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:48.164: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:48.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:48.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:50.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:50.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:50.212: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:52.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:52.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:52.258: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:54.305: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:54.305: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:54.305: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:56.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:56.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:56.351: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:58.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:58.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:08:58.504: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:00.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:00.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:00.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:02.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:02.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:02.662: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:04.705: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:04.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:04.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:09:06.748: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:06.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:06.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:08.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:08.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:08.799: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:10.835: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:10.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:10.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:09:12.835: INFO: Node bootstrap-e2e-minion-group-qp90 didn't reach desired Ready condition status (false) within 2m0s Jan 29 22:09:12.849: INFO: Node bootstrap-e2e-minion-group-0h23 didn't reach desired Ready condition status (false) within 2m0s Jan 29 22:09:12.849: INFO: Node bootstrap-e2e-minion-group-prl8 didn't reach desired Ready condition status (false) within 2m0s Jan 29 22:09:12.849: INFO: Node bootstrap-e2e-minion-group-0h23 failed reboot test. Jan 29 22:09:12.849: INFO: Node bootstrap-e2e-minion-group-prl8 failed reboot test. Jan 29 22:09:12.849: INFO: Node bootstrap-e2e-minion-group-qp90 failed reboot test. 
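The 2m0s wait above polls each node's Ready condition, expecting it to go false while outbound packets are dropped; here it never does, so all three nodes fail the reboot test. A sketch of watching the same condition by hand with kubectl (node name from this run):

    # Prints True / False / Unknown for the node's Ready condition.
    kubectl get node bootstrap-e2e-minion-group-qp90 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'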
Jan 29 22:09:12.849: INFO: Executing termination hook on nodes Jan 29 22:09:12.849: INFO: Getting external IP address for bootstrap-e2e-minion-group-0h23 Jan 29 22:09:12.849: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-0h23(35.247.69.167:22) Jan 29 22:09:28.764: INFO: ssh prow@35.247.69.167:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 22:09:28.764: INFO: ssh prow@35.247.69.167:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:07:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 22:09:28.764: INFO: ssh prow@35.247.69.167:22: stderr: "" Jan 29 22:09:28.764: INFO: ssh prow@35.247.69.167:22: exit code: 0 Jan 29 22:09:28.764: INFO: Getting external IP address for bootstrap-e2e-minion-group-prl8 Jan 29 22:09:28.764: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-prl8(35.197.11.253:22) Jan 29 22:09:29.292: INFO: ssh prow@35.197.11.253:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 22:09:29.292: INFO: ssh prow@35.197.11.253:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:07:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 22:09:29.292: INFO: ssh prow@35.197.11.253:22: stderr: "" Jan 29 22:09:29.292: INFO: ssh prow@35.197.11.253:22: exit code: 0 Jan 29 22:09:29.292: INFO: Getting external IP address for bootstrap-e2e-minion-group-qp90 Jan 29 22:09:29.292: INFO: SSH "cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log" on bootstrap-e2e-minion-group-qp90(34.82.19.122:22) Jan 29 22:09:29.813: INFO: ssh prow@34.82.19.122:22: command: cat /tmp/drop-outbound.log && rm /tmp/drop-outbound.log Jan 29 22:09:29.813: INFO: ssh prow@34.82.19.122:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I OUTPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I OUTPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 22:07:21 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D OUTPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D OUTPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 22:09:29.813: INFO: ssh prow@34.82.19.122:22: stderr: "" Jan 29 22:09:29.813: INFO: ssh prow@34.82.19.122:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 22:09:29.813 < Exit [It] each node by dropping all outbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:144 @ 01/29/23 22:09:29.813 (2m19.061s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 22:09:29.813 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 22:09:29.813 Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. 
preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-67jtp to bootstrap-e2e-minion-group-0h23 Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.461832289s (2.461840828s including waiting) Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.2:8181/ready": dial tcp 10.64.0.2:8181: connect: connection refused Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
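The DNSConfigForming event above fires when the node's resolv.conf lists more nameservers than kubelet will pass through; kubelet keeps the first three, which matches the applied nameserver line in the event. A quick check, assuming the same SSH access the test uses (external IP for bootstrap-e2e-minion-group-0h23 taken from the log):

    # More than 3 nameserver lines here triggers the DNSConfigForming warning.
    ssh prow@35.247.69.167 'grep -c ^nameserver /etc/resolv.conf'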
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": dial tcp 10.64.0.22:8181: connect: connection refused Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-67jtp_kube-system(72ca1a62-bb47-4fdd-8565-8cdea1e5a00a) Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-q6pbg to bootstrap-e2e-minion-group-0h23 Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618) Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
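The readiness failures above are plain HTTP GETs against the coredns pod IP on port 8181. Assuming the pod is still running and the pod network is reachable from where the command runs, the probe can be replayed by hand (pod IP from the events above):

    # Replays coredns's readiness probe; a connection error mirrors the failed probe.
    curl -sS --max-time 5 http://10.64.0.26:8181/ready && echo ready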
Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": dial tcp 10.64.0.26:8181: connect: connection refused Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618) Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.29:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-67jtp Jan 29 22:09:29.864: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-q6pbg Jan 29 22:09:29.864: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 22:09:29.864: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 22:09:29.864: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 22:09:29.864: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 22:09:29.864: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 22:09:29.864: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 22:09:29.864: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 22:09:29.864: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_fd4b became leader Jan 29 22:09:29.864: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_55acf became leader Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-68c9g to bootstrap-e2e-minion-group-prl8 Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 657.613501ms (657.634978ms including waiting) Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f) Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f) Jan 29 22:09:29.864: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-c8fqq to bootstrap-e2e-minion-group-0h23 Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 956.296756ms (956.305606ms including waiting) Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-c8fqq_kube-system(0836b571-aa7d-46e2-846d-c2ef4dcbfd76) Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Liveness probe failed: Get "http://10.64.0.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-srg78 to bootstrap-e2e-minion-group-qp90 Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 679.018448ms (679.041957ms including waiting) Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-srg78_kube-system(e0557a1e-0314-4bfe-8bff-7b1532b1bc85) Jan 29 22:09:29.864: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "http://10.64.3.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-c8fqq Jan 29 22:09:29.864: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-srg78 Jan 29 22:09:29.864: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-68c9g Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 22:09:29.864: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 22:09:29.864: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 22:09:29.864: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 22:09:29.864: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 22:09:29.864: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_17b47e1a-c3ff-42ad-b566-12beffed0288 became leader Jan 29 22:09:29.864: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a96406e5-1a2d-415b-8674-47808fdfe3fe became leader Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
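The kube-apiserver Unhealthy events above show the probes hitting https://127.0.0.1:443/readyz and /livez (with etcd and the kms providers excluded); "connection refused" during this window simply means nothing was listening while the apiserver restarted. A hedged Go sketch of performing the same checks by hand, using only the standard library (certificate verification is skipped purely because this is an illustrative local check):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probe performs one GET against a kube-apiserver health endpoint,
// mirroring the probe URLs recorded in the events above.
func probe(url string) {
	client := &http.Client{
		Timeout: 2 * time.Second, // short timeout, as a kubelet probe would use
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "connect: connection refused" while the apiserver restarts
		fmt.Printf("%s: %v\n", url, err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("%s: %s\n", url, resp.Status)
}

func main() {
	probe("https://127.0.0.1:443/readyz")
	probe("https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1")
}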
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-8w5rj to bootstrap-e2e-minion-group-0h23 Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.575856713s (2.575872946s including waiting) Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container autoscaler Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8w5rj_kube-system(7b9fb270-f42e-4c3d-9947-2b7804b28b97) Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-8w5rj Jan 29 22:09:29.864: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0h23_kube-system(a7d7c673a5678c3fd05bb8d81e613fd2) Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-prl8_kube-system(af7f7d5ac5e113eedfb5c13ec70c059c) Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
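The recurring DNSConfigForming warnings above reflect the kubelet's cap of three nameservers per resolv.conf (the glibc resolver limit): extra entries are dropped and only the first three are applied, which is why the applied line in the events is exactly "1.1.1.1 8.8.8.8 1.0.0.1". A rough sketch of that truncation behavior, not the kubelet's actual code; the fourth resolver in the example is hypothetical:

package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3 // the per-resolv.conf limit the kubelet enforces

// formDNSConfig mimics the behaviour behind the DNSConfigForming events:
// when more nameservers are configured than the limit allows, only the
// first three are applied and the rest are dropped with a warning.
func formDNSConfig(nameservers []string) []string {
	if len(nameservers) > maxNameservers {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, "+
			"the applied nameserver line is: %s\n",
			strings.Join(nameservers[:maxNameservers], " "))
		return nameservers[:maxNameservers]
	}
	return nameservers
}

func main() {
	// Four configured resolvers (the last one is made up for illustration);
	// only the first three survive, matching the applied line above.
	formDNSConfig([]string{"1.1.1.1", "8.8.8.8", "1.0.0.1", "169.254.169.254"})
}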
Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy Jan 29 22:09:29.864: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qp90_kube-system(fdc7414ccaf4c7060bb3a896ee9c4fdc) Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 22:09:29.864: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e5aa9ff1-292b-44e6-a72b-8735e76d222a became leader Jan 29 22:09:29.864: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_68b1b904-ad42-431c-80bb-86195fbcd230 became leader Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
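The FailedScheduling events above ("untolerated taint {node.kubernetes.io/not-ready: }") record deployment-managed pods waiting out the taint that nodes carry until they first report Ready; note the daemonset-managed pods in this log were not blocked, consistent with the tolerations daemonset pods receive automatically. Purely as an illustration (this is not what these pods declare), a toleration for that taint expressed with the Go API types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Illustrative toleration for the taint named in the
	// FailedScheduling events above.
	tol := corev1.Toleration{
		Key:      "node.kubernetes.io/not-ready",
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("tolerates %s:%s (operator %s)\n", tol.Key, tol.Effect, tol.Operator)
}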
Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-br722 to bootstrap-e2e-minion-group-0h23 Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.254994621s (1.255003973s including waiting) Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend Jan 29 22:09:29.864: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-br722 Jan 29 22:09:29.864: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 22:09:29.864: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7h8xr to bootstrap-e2e-minion-group-0h23 Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 728.14263ms (728.154201ms including waiting) Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.813378152s (1.81340007s including waiting) Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-gjgkr to bootstrap-e2e-minion-group-prl8 Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 725.023258ms (725.04726ms including waiting) Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.833322514s (1.833331253s including waiting) Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-n78nd to bootstrap-e2e-minion-group-qp90 Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 789.594528ms (789.609762ms including waiting) Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.896285117s (1.896293813s including waiting) Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-phrn6 to bootstrap-e2e-master Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 638.236648ms (638.252765ms including waiting) Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.561997891s (1.56200326s including waiting) Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-7h8xr Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-phrn6 Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-gjgkr Jan 29 22:09:29.864: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-n78nd Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-858xc to bootstrap-e2e-minion-group-0h23 Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.198313689s (3.198321554s including waiting) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metrics-server Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metrics-server Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.812916392s (3.812924842s including waiting) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metrics-server-nanny Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metrics-server-nanny Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container metrics-server Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container metrics-server-nanny Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-858xc Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-858xc Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-qmbs6 to bootstrap-e2e-minion-group-qp90 Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.353709849s (1.353731831s including waiting) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.015217229s (1.01523164s including waiting) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server-nanny Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server-nanny Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": dial tcp 10.64.3.3:10250: connect: connection refused Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": dial tcp 10.64.3.3:10250: connect: connection refused Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server-nanny Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": read tcp 10.64.3.1:36350->10.64.3.3:10250: read: connection reset by peer Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Container metrics-server failed liveness probe, will be restarted Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Failed: Error: failed to get sandbox container task: no running task found: task 4ac2767f3e99f3d72489c6f4ac8b5d5588d1b55aca1cdd3beefe33bfd1fb8c2e not found: not found Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server-nanny Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server-nanny Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.8:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.8:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-qmbs6_kube-system(44703c8b-4289-449f-8dce-96f50d686272) Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-qmbs6 Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 22:09:29.864: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-0h23 Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.290617862s (2.290627616s including waiting) Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(1b9daa28-15d1-49b3-a153-e62f36714b55) Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
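The repeated "Back-off restarting failed container" events above are the kubelet's crash-loop backoff: the restart delay starts small and doubles per failure up to a cap, which is why BackOff events for the same pod keep appearing over several minutes. A sketch of that schedule, assuming the kubelet's default constants of a 10s initial delay and a 5m cap:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Crash-loop restart delays: double on each failed restart
	// until the cap is reached, then stay there.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for i := 1; delay < maxDelay; i++ {
		fmt.Printf("restart %d: wait %v\n", i, delay)
		delay *= 2
	}
	fmt.Printf("further restarts: wait %v (capped)\n", maxDelay)
}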
Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container volume-snapshot-controller Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(1b9daa28-15d1-49b3-a153-e62f36714b55) Jan 29 22:09:29.864: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 22:09:29.864 (51ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 22:09:29.864 Jan 29 22:09:29.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 22:09:29.91 (46ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 22:09:29.91 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 22:09:29.91 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 22:09:29.91 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 22:09:29.91 STEP: Collecting events from namespace "reboot-4096". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 22:09:29.91 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 22:09:29.972 Jan 29 22:09:30.014: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 22:09:30.014: INFO: Jan 29 22:09:30.059: INFO: Logging node info for node bootstrap-e2e-master Jan 29 22:09:30.102: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master b2fbf9c6-a8ad-4945-a5e2-052805da66e2 1475 0 2023-01-29 22:00:49 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 22:01:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 22:01:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 22:06:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:15 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:15 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:15 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:06:15 +0000 UTC,LastTransitionTime:2023-01-29 22:00:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.220.45,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0fdb3cfe29f66637553465718381a2f8,SystemUUID:0fdb3cfe-29f6-6637-5534-65718381a2f8,BootID:6f3f19cb-1b2d-43f1-a98c-6f2c40560047,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:09:30.102: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 22:09:30.152: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 22:09:30.328: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container kube-apiserver ready: true, restart count 2 Jan 29 22:09:30.328: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 22:00:22 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container l7-lb-controller ready: false, restart count 4 Jan 29 22:09:30.328: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container etcd-container ready: true, restart count 1 Jan 29 22:09:30.328: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container konnectivity-server-container ready: true, restart count 0 Jan 29 22:09:30.328: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container kube-scheduler ready: false, restart count 2 Jan 29 22:09:30.328: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 22:00:22 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container kube-addon-manager ready: true, restart count 2 Jan 29 22:09:30.328: INFO: metadata-proxy-v0.1-phrn6 started at 2023-01-29 22:00:57 +0000 UTC (0+2 container statuses recorded) Jan 29 22:09:30.328: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 22:09:30.328: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 22:09:30.328: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container etcd-container ready: true, restart count 2 Jan 29 22:09:30.328: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.328: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 29 22:09:30.498: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 22:09:30.498: INFO: Logging node info for node bootstrap-e2e-minion-group-0h23 Jan 29 22:09:30.540: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0h23 4bc52c5d-d6ac-4b10-a791-0f46bb41bbe0 1611 0 2023-01-29 22:00:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0h23 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:06:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 22:06:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-0h23,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:57 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:00:54 +0000 UTC,LastTransitionTime:2023-01-29 22:00:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:06:58 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.69.167,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0h23.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0h23.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8b143884b0552b595cbcfc83ba2dba58,SystemUUID:8b143884-b055-2b59-5cbc-fc83ba2dba58,BootID:65064e71-361a-40cd-9ae4-21f18d6bad09,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:09:30.540: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0h23 Jan 29 22:09:30.585: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0h23 Jan 29 22:09:30.652: INFO: coredns-6846b5b5f-q6pbg started at 2023-01-29 22:01:02 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container coredns ready: true, restart count 4 Jan 29 
22:09:30.652: INFO: kube-proxy-bootstrap-e2e-minion-group-0h23 started at 2023-01-29 22:00:45 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 22:09:30.652: INFO: l7-default-backend-8549d69d99-br722 started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container default-http-backend ready: true, restart count 2 Jan 29 22:09:30.652: INFO: coredns-6846b5b5f-67jtp started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container coredns ready: true, restart count 3 Jan 29 22:09:30.652: INFO: kube-dns-autoscaler-5f6455f985-8w5rj started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container autoscaler ready: true, restart count 5 Jan 29 22:09:30.652: INFO: volume-snapshot-controller-0 started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container volume-snapshot-controller ready: true, restart count 7 Jan 29 22:09:30.652: INFO: metadata-proxy-v0.1-7h8xr started at 2023-01-29 22:00:46 +0000 UTC (0+2 container statuses recorded) Jan 29 22:09:30.652: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 22:09:30.652: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 22:09:30.652: INFO: konnectivity-agent-c8fqq started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.652: INFO: Container konnectivity-agent ready: false, restart count 4 Jan 29 22:09:30.817: INFO: Latency metrics for node bootstrap-e2e-minion-group-0h23 Jan 29 22:09:30.817: INFO: Logging node info for node bootstrap-e2e-minion-group-prl8 Jan 29 22:09:30.859: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-prl8 dc1f933b-530d-4900-80bb-fdebf917515a 1632 0 2023-01-29 22:00:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-prl8 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:06:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 22:06:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:07:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 22:07:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-prl8,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 
UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:06:59 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:06:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:07:00 +0000 UTC,LastTransitionTime:2023-01-29 22:07:00 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.11.253,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-prl8.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-prl8.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e4ee97ed1426b2932671b760c0a7fcdd,SystemUUID:e4ee97ed-1426-b293-2671-b760c0a7fcdd,BootID:52efc0e9-9d9e-407b-ac2d-5f66c05ac932,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:09:30.859: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-prl8 Jan 29 22:09:30.908: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-prl8 Jan 29 22:09:30.969: INFO: konnectivity-agent-68c9g started at 2023-01-29 22:01:07 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.969: INFO: Container konnectivity-agent ready: false, restart count 4 Jan 29 22:09:30.969: INFO: kube-proxy-bootstrap-e2e-minion-group-prl8 started at 2023-01-29 22:00:50 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:30.969: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 22:09:30.969: INFO: metadata-proxy-v0.1-gjgkr started at 2023-01-29 22:00:51 +0000 UTC (0+2 container statuses recorded) Jan 29 22:09:30.969: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 22:09:30.969: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 22:09:31.131: INFO: Latency metrics for node bootstrap-e2e-minion-group-prl8 Jan 29 22:09:31.131: INFO: Logging node info for node bootstrap-e2e-minion-group-qp90 Jan 29 22:09:31.173: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-qp90 6a45fc18-dedd-4084-96e2-e6ff57e70a04 1674 0 2023-01-29 22:00:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-qp90 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:06:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 22:07:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:07:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 22:07:05 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-qp90,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:07:04 +0000 UTC,LastTransitionTime:2023-01-29 22:07:03 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:05 +0000 UTC,LastTransitionTime:2023-01-29 22:07:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:05 +0000 UTC,LastTransitionTime:2023-01-29 22:07:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:07:05 +0000 UTC,LastTransitionTime:2023-01-29 22:07:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:07:05 +0000 UTC,LastTransitionTime:2023-01-29 22:07:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.82.19.122,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-qp90.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-qp90.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f468cde0139c49621ce815c9f02c0393,SystemUUID:f468cde0-139c-4962-1ce8-15c9f02c0393,BootID:632f7d0e-dfe9-46a1-91f4-d61d6e33f868,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:09:31.173: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-qp90 Jan 29 22:09:31.217: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-qp90 Jan 29 22:09:31.290: INFO: kube-proxy-bootstrap-e2e-minion-group-qp90 started at 2023-01-29 22:00:52 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:31.290: INFO: Container kube-proxy ready: true, restart count 3 Jan 29 22:09:31.290: INFO: metadata-proxy-v0.1-n78nd started at 2023-01-29 22:00:52 +0000 UTC (0+2 container statuses recorded) Jan 29 22:09:31.290: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 22:09:31.290: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 22:09:31.290: INFO: konnectivity-agent-srg78 started at 2023-01-29 22:01:07 +0000 UTC (0+1 container statuses recorded) Jan 29 22:09:31.290: INFO: Container konnectivity-agent ready: true, restart count 4 Jan 29 22:09:31.290: INFO: metrics-server-v0.5.2-867b8754b9-qmbs6 started at 2023-01-29 22:01:18 +0000 UTC (0+2 container statuses recorded) Jan 29 22:09:31.290: INFO: Container metrics-server ready: false, restart count 6 Jan 29 22:09:31.290: INFO: Container metrics-server-nanny ready: false, restart count 5 Jan 29 22:09:34.694: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-qp90 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 22:09:34.694 (4.783s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 22:09:34.694 (4.784s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 22:09:34.694 STEP: Destroying namespace "reboot-4096" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 22:09:34.694 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 22:09:34.736 (43ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 22:09:34.736 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 22:09:34.736 (0s)
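The repeated "Waiting up to 20s for node ... condition Ready to be true" and "Waiting up to 5m0s for pod ... running and ready, or succeeded" lines above come from the e2e framework's polling helpers. For readers reproducing the check outside the suite, here is a minimal client-go sketch of the node-Ready poll; the node name and kubeconfig path are taken from the log, but waitForNodeReady itself is an illustration, not the framework's actual helper.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node's Ready condition every 2s until it matches
// want or the timeout expires, mirroring the "Waiting up to 20s for node ...
// condition Ready to be true" lines in the log.
func waitForNodeReady(cs kubernetes.Interface, name string, want bool, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == v1.NodeReady {
				return (c.Status == v1.ConditionTrue) == want, nil
			}
		}
		return false, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitForNodeReady(cs, "bootstrap-e2e-minion-group-0h23", true, 20*time.Second); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

The same poll, run with want=false and then want=true, is how the reboot specs below decide whether a node actually went down and came back.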
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\striggering\skernel\spanic\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 22:15:25.383
from ginkgo_report.xml
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 22:11:49.544 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 22:11:49.544 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 22:11:49.544 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 22:11:49.544 Jan 29 22:11:49.544: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 22:11:49.545 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 22:11:53.023 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 22:11:53.103 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 22:11:53.184 (3.64s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 22:11:53.184 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 22:11:53.184 (0s) > Enter [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/29/23 22:11:53.184 Jan 29 22:11:53.277: INFO: Getting bootstrap-e2e-minion-group-prl8 Jan 29 22:11:53.278: INFO: Getting bootstrap-e2e-minion-group-qp90 Jan 29 22:11:53.278: INFO: Getting bootstrap-e2e-minion-group-0h23 Jan 29 22:11:53.338: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-qp90 condition Ready to be true Jan 29 22:11:53.338: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-prl8 condition Ready to be true Jan 29 22:11:53.338: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-0h23 condition Ready to be true Jan 29 22:11:53.383: INFO: Node bootstrap-e2e-minion-group-qp90 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-n78nd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.383: INFO: Node bootstrap-e2e-minion-group-prl8 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-gjgkr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.383: INFO: Node bootstrap-e2e-minion-group-0h23 has 4 assigned pods with no liveness probes: [metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8w5rj 
kube-proxy-bootstrap-e2e-minion-group-0h23] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-0h23" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.384: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qp90" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.384: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-prl8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.384: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-7h8xr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.384: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.384: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-8w5rj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.427: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=true. Elapsed: 43.665742ms Jan 29 22:11:53.427: INFO: Pod "metadata-proxy-v0.1-n78nd" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.428: INFO: Pod "kube-dns-autoscaler-5f6455f985-8w5rj": Phase="Running", Reason="", readiness=true. Elapsed: 44.618835ms Jan 29 22:11:53.429: INFO: Pod "kube-dns-autoscaler-5f6455f985-8w5rj" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.430: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.262057ms Jan 29 22:11:53.430: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:11:53.431: INFO: Pod "metadata-proxy-v0.1-7h8xr": Phase="Running", Reason="", readiness=true. Elapsed: 47.523655ms Jan 29 22:11:53.431: INFO: Pod "metadata-proxy-v0.1-7h8xr" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.431: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8": Phase="Running", Reason="", readiness=true. Elapsed: 47.682032ms Jan 29 22:11:53.431: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.431: INFO: Pod "metadata-proxy-v0.1-gjgkr": Phase="Running", Reason="", readiness=true. Elapsed: 48.166646ms Jan 29 22:11:53.431: INFO: Pod "metadata-proxy-v0.1-gjgkr" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.431: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr] Jan 29 22:11:53.431: INFO: Getting external IP address for bootstrap-e2e-minion-group-prl8 Jan 29 22:11:53.431: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-prl8(35.197.11.253:22) Jan 29 22:11:53.432: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=true. Elapsed: 48.262495ms Jan 29 22:11:53.432: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.432: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:11:53.432: INFO: Getting external IP address for bootstrap-e2e-minion-group-qp90 Jan 29 22:11:53.432: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-qp90(34.82.19.122:22) Jan 29 22:11:53.432: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0h23": Phase="Running", Reason="", readiness=true. Elapsed: 48.598549ms Jan 29 22:11:53.432: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0h23" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.950: INFO: ssh prow@35.197.11.253:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 22:11:53.950: INFO: ssh prow@35.197.11.253:22: stdout: "" Jan 29 22:11:53.950: INFO: ssh prow@35.197.11.253:22: stderr: "" Jan 29 22:11:53.950: INFO: ssh prow@35.197.11.253:22: exit code: 0 Jan 29 22:11:53.950: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-prl8 condition Ready to be false Jan 29 22:11:53.951: INFO: ssh prow@34.82.19.122:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 22:11:53.951: INFO: ssh prow@34.82.19.122:22: stdout: "" Jan 29 22:11:53.951: INFO: ssh prow@34.82.19.122:22: stderr: "" Jan 29 22:11:53.951: INFO: ssh prow@34.82.19.122:22: exit code: 0 Jan 29 22:11:53.951: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-qp90 condition Ready to be false Jan 29 22:11:53.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:53.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:55.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.088315574s Jan 29 22:11:55.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:11:56.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:56.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:57.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.088916315s Jan 29 22:11:57.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:11:58.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:58.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:59.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.087889706s Jan 29 22:11:59.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:00.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:00.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:12:01.490: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.106442473s Jan 29 22:12:01.490: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:02.178: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:02.178: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:03.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.088240889s Jan 29 22:12:03.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:04.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:04.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:05.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.088768761s Jan 29 22:12:05.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:06.275: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:06.275: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:07.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.088905636s Jan 29 22:12:07.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:08.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:08.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:09.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.08814026s Jan 29 22:12:09.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:10.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:10.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:11.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.08804939s Jan 29 22:12:11.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:12.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:12:12.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:13.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.088212979s Jan 29 22:12:13.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:14.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:14.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:15.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.088986525s Jan 29 22:12:15.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:16.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:16.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:17.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.088843713s Jan 29 22:12:17.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:18.536: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:18.536: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:19.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.088098001s Jan 29 22:12:19.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:20.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:20.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:21.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.087898127s Jan 29 22:12:21.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:22.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:22.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:23.471: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.087672404s Jan 29 22:12:23.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:24.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:24.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:25.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.08872282s Jan 29 22:12:25.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:26.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:26.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:27.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.088634212s Jan 29 22:12:27.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:28.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:28.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:12:29.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.088684469s Jan 29 22:12:29.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:30.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:30.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:31.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.088280515s Jan 29 22:12:31.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:32.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:32.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:33.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.088295429s Jan 29 22:12:33.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:34.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:34.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:35.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.088587838s Jan 29 22:12:35.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:36.932: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:36.932: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:37.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.088744094s Jan 29 22:12:37.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:38.975: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-qp90 condition Ready to be true Jan 29 22:12:38.976: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:39.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:12:39.471: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.087430428s Jan 29 22:12:39.471: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:41.018: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. 
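Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled

At 22:12:39 the picture changes: qp90's Ready condition has gone Unknown (reason NodeStatusUnknown, "Kubelet stopped posting node status"), which the poll reports as "false instead of true", and the test switches to waiting for Ready to come back. A sketch of this node-condition polling with client-go, under the assumption that a missing or non-True Ready condition counts as not ready (function names are mine, not the framework's):

```go
// Sketch: the "Condition Ready of node ... is true instead of false" polling
// seen above. wantReady=false is what the reboot test waits for after cutting
// a node off; an Unknown status surfaces here as "not True".
package rebootsketch

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func nodeReadyMatches(node *v1.Node, wantReady bool) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			return (c.Status == v1.ConditionTrue) == wantReady
		}
	}
	return false // no Ready condition recorded yet
}

func waitForNodeReady(cs kubernetes.Interface, name string, wantReady bool, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // the "Couldn't get node ..." case: just retry
		}
		return nodeReadyMatches(node, wantReady), nil
	})
}
```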
Jan 29 22:12:41.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:12:41.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.087855554s Jan 29 22:12:41.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:43.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:43.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:43.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.088084915s Jan 29 22:12:43.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:45.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:45.160: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:45.474: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 52.089713066s Jan 29 22:12:45.474: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:47.147: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:47.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:47.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.088440996s Jan 29 22:12:47.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:49.189: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-prl8 condition Ready to be true Jan 29 22:12:49.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:12:49.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:49.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.088153257s Jan 29 22:12:49.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:51.274: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:12:51.288: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:51.476: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.091996321s Jan 29 22:12:51.476: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:53.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:12:53.330: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:53.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.087828383s Jan 29 22:12:53.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:55.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:12:55.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:55.471: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
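Elapsed: 1m2.087433586s

The "Node is tainted by NodeController" messages mark the next stage of an unreachable node's lifecycle: once the kubelet misses its heartbeats, the node lifecycle controller sets Ready to Unknown and then applies the node.kubernetes.io/unreachable taints, NoSchedule first and NoExecute about five seconds later, matching the 22:12:37/22:12:42 timestamps above for qp90. A sketch of detecting that taint with client-go; the helper name is mine, though the taint key is the upstream core/v1 constant:

```go
// Sketch: spotting the NodeController taint reported in the log.
package rebootsketch

import v1 "k8s.io/api/core/v1"

func hasUnreachableTaint(node *v1.Node) bool {
	for _, t := range node.Spec.Taints {
		// v1.TaintNodeUnreachable == "node.kubernetes.io/unreachable"
		if t.Key == v1.TaintNodeUnreachable {
			return true
		}
	}
	return false
}
```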
Jan 29 22:12:55.471: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:57.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:12:57.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:57.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.087906454s Jan 29 22:12:57.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:59.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:12:59.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:59.471: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m6.087442901s Jan 29 22:12:59.471: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:13:01.471: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m8.08763095s Jan 29 22:13:01.471: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 22:13:01.472: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23] Jan 29 22:13:01.472: INFO: Getting external IP address for bootstrap-e2e-minion-group-0h23 Jan 29 22:13:01.472: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-0h23(35.247.69.167:22) Jan 29 22:13:01.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:13:01.502: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:13:01.994: INFO: ssh prow@35.247.69.167:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 22:13:01.994: INFO: ssh prow@35.247.69.167:22: stdout: "" Jan 29 22:13:01.994: INFO: ssh prow@35.247.69.167:22: stderr: "" Jan 29 22:13:01.994: INFO: ssh prow@35.247.69.167:22: exit code: 0 Jan 29 22:13:01.994: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-0h23 condition Ready to be false Jan 29 22:13:02.036: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:13:03.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:13:03.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:13:04.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
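AppArmor enabled

The SSH command at 22:13:01 above is how this iteration takes down bootstrap-e2e-minion-group-0h23: it enables sysrq (echo 1 to /proc/sys/kernel/sysrq), sleeps ten seconds, then writes 'c' to /proc/sysrq-trigger, which crashes the kernel on the spot. Because the node dies mid-command, the whole thing is detached with nohup/& and all output is discarded, so the SSH session returns immediately with exit code 0 and empty stdout/stderr, exactly as logged. A standalone sketch using the plain OpenSSH client (the suite actually goes through its own SSH helper; host and user are taken from the log lines above):

```go
// Sketch, assuming a plain ssh client and key-based access to the node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Verbatim from the log: enable sysrq, wait 10s, then trigger a kernel
	// crash. nohup + & detach it so ssh can return before the panic hits.
	crash := "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && " +
		"sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &"
	out, err := exec.Command("ssh", "prow@35.247.69.167", crash).CombinedOutput()
	fmt.Printf("output=%q err=%v\n", out, err)
}
```

After issuing this, the test waits up to 2m0s for the node's Ready condition to go false, which is the polling that follows.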
Jan 29 22:13:05.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:13:05.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:13:06.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:13:07.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:13:07.633: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:13:08.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:13:09.654: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:09.673: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:10.205: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:11.694: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:11.713: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:12.246: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:13.735: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:13.753: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:14.286: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:15.775: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:15.793: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:16.326: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:17.816: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:17.833: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:18.367: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:19.857: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:19.874: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:20.407: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:21.897: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:21.914: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:22.447: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:23.936: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:23.954: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:24.487: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:25.977: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:25.993: INFO: Couldn't get node
bootstrap-e2e-minion-group-qp90 Jan 29 22:13:26.527: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:28.017: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:28.034: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:28.567: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:30.058: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:30.074: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:30.607: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:32.098: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:32.114: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:32.647: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:34.138: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:34.154: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:34.687: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:36.178: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:36.193: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:36.728: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:38.219: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:38.233: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:38.768: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:40.259: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:40.273: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:40.808: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:42.299: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:42.314: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:42.848: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:44.339: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:44.354: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:44.888: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:46.379: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:46.394: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:46.929: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:48.420: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:48.434: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:48.969: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:50.460: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:50.474: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:51.009: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:52.500: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:52.513: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:53.049: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:54.540: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:54.553: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:55.090: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:56.580: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:56.593: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:57.130: INFO: Couldn't get node 
bootstrap-e2e-minion-group-0h23 Jan 29 22:13:58.620: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:58.633: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:59.170: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:00.660: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:00.673: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:01.211: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:02.699: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:02.713: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:03.251: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:04.739: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:04.753: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:05.291: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:06.780: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:06.793: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:07.331: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:08.821: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:08.834: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:09.371: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:10.861: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:10.874: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:11.411: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:12.900: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:12.913: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:13.451: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:14.940: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:14.953: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:15.491: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:16.980: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:16.993: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:17.531: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:19.020: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:19.033: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:19.571: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:21.060: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:21.073: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:21.611: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:23.101: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:23.113: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:23.651: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:25.140: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:25.153: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:25.691: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:27.181: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:27.192: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:32.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController 
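with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure

From 22:13:09 to roughly 22:14:27 every poll fails outright with "Couldn't get node", and it fails for all three nodes at once, which points at the Get requests themselves failing (plausibly the API server was briefly unreachable after the crash was injected) rather than at any Node object disappearing. The poll loops tolerate this and simply retry, which is why the taint messages resume afterwards. A sketch of telling those two failure modes apart; classifyGetError is a hypothetical helper:

```go
// Sketch: what a "Couldn't get node ..." line can hide. The Get errored, and
// a NotFound error (object deleted) is worth separating from transport-level
// failures (timeout, connection refused) that just call for another retry.
package rebootsketch

import apierrors "k8s.io/apimachinery/pkg/api/errors"

func classifyGetError(err error) string {
	if apierrors.IsNotFound(err) {
		return "node object deleted"
	}
	// All three nodes failing at once, as in the log, suggests the API
	// server itself was unreachable; keep polling.
	return "request failed; retry"
}
```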
Jan 29 22:14:32.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:32.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:34.104: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:34.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:34.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:36.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:36.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:36.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:38.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:38.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:38.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:40.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:40.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}].
Failure Jan 29 22:14:40.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:42.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:42.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:42.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:44.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:44.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:44.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:46.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:46.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:46.388: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:48.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:48.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:48.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:50.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:50.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:50.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:52.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:52.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:52.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:54.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:54.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:54.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:56.624: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:56.624: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:56.626: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:58.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:58.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. 
Failure Jan 29 22:14:58.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:00.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:00.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:15:00.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:02.743: INFO: Node bootstrap-e2e-minion-group-0h23 didn't reach desired Ready condition status (false) within 2m0s Jan 29 22:15:02.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:02.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:04.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:04.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:06.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:06.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:08.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:08.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. 
Failure Jan 29 22:15:10.971: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:10.971: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:13.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:13.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:15.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:15.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:17.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:17.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:19.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 22:15:18 +0000 UTC} {node.kubernetes.io/not-ready NoSchedule 2023-01-29 22:15:18 +0000 UTC}]. Failure Jan 29 22:15:19.159: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:15:19.159: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-n78nd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:15:19.159: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qp90" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:15:19.202: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=false. Elapsed: 43.140158ms Jan 29 22:15:19.202: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=false. 
Elapsed: 43.117212ms Jan 29 22:15:19.202: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-n78nd' on 'bootstrap-e2e-minion-group-qp90' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:12:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:07:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC }] Jan 29 22:15:19.202: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-qp90' on 'bootstrap-e2e-minion-group-qp90' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:12:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:07:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC }] Jan 29 22:15:21.201: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 22:15:18 +0000 UTC}]. Failure Jan 29 22:15:21.245: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=false. Elapsed: 2.086449834s Jan 29 22:15:21.245: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=false. Elapsed: 2.086515872s Jan 29 22:15:21.245: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-n78nd' on 'bootstrap-e2e-minion-group-qp90' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:12:37 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:15:19 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC }] Jan 29 22:15:21.245: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-qp90' on 'bootstrap-e2e-minion-group-qp90' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:12:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:07:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC }] Jan 29 22:15:23.247: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=true. Elapsed: 4.088451323s Jan 29 22:15:23.247: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=true. Elapsed: 4.088415254s Jan 29 22:15:23.247: INFO: Pod "metadata-proxy-v0.1-n78nd" satisfied condition "running and ready, or succeeded" Jan 29 22:15:23.247: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90" satisfied condition "running and ready, or succeeded" Jan 29 22:15:23.247: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:15:23.247: INFO: Reboot successful on node bootstrap-e2e-minion-group-qp90 Jan 29 22:15:23.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 22:15:18 +0000 UTC}]. Failure Jan 29 22:15:25.296: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr] Jan 29 22:15:25.296: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-gjgkr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:15:25.296: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-prl8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:15:25.381: INFO: Pod "metadata-proxy-v0.1-gjgkr": Phase="Running", Reason="", readiness=true. Elapsed: 84.823677ms Jan 29 22:15:25.381: INFO: Pod "metadata-proxy-v0.1-gjgkr" satisfied condition "running and ready, or succeeded" Jan 29 22:15:25.382: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8": Phase="Running", Reason="", readiness=true. Elapsed: 86.418418ms Jan 29 22:15:25.382: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8" satisfied condition "running and ready, or succeeded" Jan 29 22:15:25.382: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr] Jan 29 22:15:25.382: INFO: Reboot successful on node bootstrap-e2e-minion-group-prl8 Jan 29 22:15:25.382: INFO: Node bootstrap-e2e-minion-group-0h23 failed reboot test. [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 22:15:25.383 < Exit [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/29/23 22:15:25.383 (3m32.199s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 22:15:25.383 STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 22:15:25.383 Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
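The verdict just above ("Node bootstrap-e2e-minion-group-0h23 failed reboot test") comes from the framework polling each node's Ready condition for a fixed window: first waiting for Ready to go false (proof the disruption took effect — the 2m0s timeout that 0h23 missed), then waiting for it to come back true. A minimal client-go sketch of that kind of wait, under assumed names and a 2s poll interval, not the framework's actual helper:

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReadyStatus polls a node until its Ready condition reaches the
// wanted value, printing a line in roughly the shape seen in this log.
func waitForNodeReadyStatus(c kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors while the node bounces
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				if cond.Status != want {
					fmt.Printf("Condition Ready of node %s is %s instead of %s. Reason: %s\n", name, cond.Status, want, cond.Reason)
				}
				return cond.Status == want, nil
			}
		}
		return false, nil
	})
}
```

A node that never reaches the wanted status inside the window is reported exactly as above ("didn't reach desired Ready condition status (false) within 2m0s") and counted as a reboot failure.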
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-67jtp to bootstrap-e2e-minion-group-0h23 Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.461832289s (2.461840828s including waiting) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.2:8181/ready": dial tcp 10.64.0.2:8181: connect: connection refused Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
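Everything from "Collecting events from namespace" onward — the lines above and below — is the AfterEach step dumping every kube-system event for debugging. A rough client-go sketch of such a dump; the real collection lives in the reboot test's AfterEach, so this listing is an assumption of its shape rather than a copy:

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpEvents lists a namespace's events and prints them in roughly the
// "event for NAME: {component host} Reason: message" shape seen here.
func dumpEvents(c kubernetes.Interface, ns string) error {
	events, err := c.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			e.InvolvedObject.Name, e.Source.Component, e.Source.Host, e.Reason, e.Message)
	}
	return nil
}
```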
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": dial tcp 10.64.0.22:8181: connect: connection refused Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-67jtp_kube-system(72ca1a62-bb47-4fdd-8565-8cdea1e5a00a) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.28:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-q6pbg to bootstrap-e2e-minion-group-0h23 Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
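The repeated "Readiness probe failed" entries are HTTP GETs against coredns's /ready endpoint on port 8181: "connection refused" means the container was down or restarting, while "context deadline exceeded" means it was up but too slow to answer within the probe timeout. A probe of that shape expressed with client-go types; the timing values are assumed defaults, not read from the cluster's coredns deployment:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Illustrative readiness probe matching what the events imply: an HTTP GET
// on /ready:8181, failing either by refused connection or by timeout.
var corednsReadiness = corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/ready",
			Port: intstr.FromInt(8181),
		},
	},
	PeriodSeconds:    10,
	TimeoutSeconds:   1, // "context deadline exceeded ... awaiting headers" is this deadline firing
	FailureThreshold: 3,
}
```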
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": dial tcp 10.64.0.26:8181: connect: connection refused Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.29:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
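The DNSConfigForming warnings are kubelet trimming the pod's resolv.conf: it keeps at most three nameservers (the glibc resolver limit) and logs the line it actually applied, here 1.1.1.1 8.8.8.8 1.0.0.1. A toy sketch of that trimming, not kubelet's real implementation:

```go
package main

import "fmt"

// capNameservers keeps the first three nameservers, mirroring the
// DNSConfigForming behavior logged above. Sketch only.
func capNameservers(servers []string) []string {
	const maxNameservers = 3 // kubelet's limit, matching glibc's MAXNS
	if len(servers) <= maxNameservers {
		return servers
	}
	applied := servers[:maxNameservers]
	fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %v\n", applied)
	return applied
}

func main() {
	// A fourth server (e.g. a metadata resolver) would be dropped.
	capNameservers([]string{"1.1.1.1", "8.8.8.8", "1.0.0.1", "169.254.169.254"})
}
```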
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-67jtp Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-q6pbg Jan 29 22:15:25.460: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 22:15:25.460: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
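The single FailedCreate from the replicaset-controller is a quota race during bring-up: kube-system carries a ResourceQuota scoped to the two critical priority classes, and the replica was created before that quota could be evaluated. A quota of the implied shape in client-go types; the object name and the pod limit are assumptions:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A ResourceQuota whose scopeSelector matches the FailedCreate message:
// {PriorityClass In [system-node-critical system-cluster-critical]}.
var criticalPodsQuota = corev1.ResourceQuota{
	ObjectMeta: metav1.ObjectMeta{Name: "critical-pods", Namespace: "kube-system"}, // name assumed
	Spec: corev1.ResourceQuotaSpec{
		Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("1000")}, // limit assumed
		ScopeSelector: &corev1.ScopeSelector{
			MatchExpressions: []corev1.ScopedResourceSelectorRequirement{{
				ScopeName: corev1.ResourceQuotaScopePriorityClass,
				Operator:  corev1.ScopeSelectorOpIn,
				Values:    []string{"system-node-critical", "system-cluster-critical"},
			}},
		},
	},
}
```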
Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 22:15:25.460: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_fd4b became leader Jan 29 22:15:25.460: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_55acf became leader Jan 29 22:15:25.460: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ad28a became leader Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-68c9g to bootstrap-e2e-minion-group-prl8 Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 657.613501ms (657.634978ms including waiting) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-c8fqq to bootstrap-e2e-minion-group-0h23 Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 956.296756ms (956.305606ms including waiting) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {node-controller } NodeNotReady: Node is not ready Jan 29 
22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-c8fqq_kube-system(0836b571-aa7d-46e2-846d-c2ef4dcbfd76) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Liveness probe failed: Get "http://10.64.0.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Failed: Error: failed to get sandbox container task: no running task found: task b2c0d64625e18667eee1d0a95e38a58d19d52df858184ed33ed54f65ddc2f556 not found: not found Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-c8fqq_kube-system(0836b571-aa7d-46e2-846d-c2ef4dcbfd76) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
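The Killing / BackOff pairs above are the standard crash-loop cycle: a crash or failed liveness probe stops the container, and kubelet delays each restart with an exponential backoff. A toy illustration of that schedule, assuming kubelet's documented defaults of a 10s initial delay doubling up to a 5m cap (the counter resets once a container runs cleanly for a while):

```go
package main

import (
	"fmt"
	"time"
)

// Prints the restart delays behind "Back-off restarting failed container".
// The 10s start and 5m cap are kubelet defaults, stated as an assumption.
func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: wait %s before starting the container again\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```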
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-srg78 to bootstrap-e2e-minion-group-qp90 Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 679.018448ms (679.041957ms including waiting) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-srg78_kube-system(e0557a1e-0314-4bfe-8bff-7b1532b1bc85) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "http://10.64.3.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent Jan 29 22:15:25.460: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-c8fqq Jan 29 22:15:25.460: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-srg78 Jan 29 22:15:25.460: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-68c9g Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
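Daemonset pods such as konnectivity-agent ride out the unreachable taints logged earlier because the daemonset controller adds matching tolerations; the node-controller's NodeNotReady events mark the outage without evicting them. A toleration of that kind in client-go types, illustrative rather than copied from the manifest:

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// With this toleration, a NoExecute taint such as
// node.kubernetes.io/unreachable does not evict the pod while its node
// reboots; the pod just goes unready until kubelet returns.
var unreachableToleration = corev1.Toleration{
	Key:      "node.kubernetes.io/unreachable",
	Operator: corev1.TolerationOpExists,
	Effect:   corev1.TaintEffectNoExecute,
}
```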
Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 22:15:25.460: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 22:15:25.460: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 22:15:25.460: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 22:15:25.460: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 22:15:25.460: INFO: event for 
kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused Jan 29 22:15:25.460: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_17b47e1a-c3ff-42ad-b566-12beffed0288 became leader Jan 29 22:15:25.460: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a96406e5-1a2d-415b-8674-47808fdfe3fe became leader Jan 29 22:15:25.460: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_12be7f8d-96f2-4959-9cf6-ed72d48a5404 became leader Jan 29 22:15:25.460: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_bdf47f3d-8a3a-42dc-96dc-92193f43c416 became leader Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-8w5rj to bootstrap-e2e-minion-group-0h23 Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.575856713s (2.575872946s including waiting) Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container autoscaler Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
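Each "became leader" entry marks a fresh leader election after a restart of the component, so four kube-controller-manager identities mean four restarts during this run. A minimal sketch of how such an election is run with client-go's leaderelection package; the lock name, identity, and timings are illustrative:

```go
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runElection blocks while competing for a Lease lock; on winning, the
// component records a "became leader" event (via an EventRecorder in
// ResourceLockConfig, omitted here) and OnStartedLeading runs.
func runElection(ctx context.Context, c kubernetes.Interface, identity string) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "kube-controller-manager", Namespace: "kube-system"},
		Client:     c.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: identity},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* work while leader */ },
			OnStoppedLeading: func() { /* stop work, possibly exit */ },
		},
	})
}
```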
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8w5rj_kube-system(7b9fb270-f42e-4c3d-9947-2b7804b28b97) Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container autoscaler Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8w5rj_kube-system(7b9fb270-f42e-4c3d-9947-2b7804b28b97) Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-8w5rj Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 22:15:25.460: INFO: event for kube-dns: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0h23_kube-system(a7d7c673a5678c3fd05bb8d81e613fd2) Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
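The FailedToUpdateEndpoint event above is an optimistic-concurrency conflict: the endpoint controller wrote kube-dns with a stale resourceVersion while something else updated it. The standard client-side remedy is to re-read and retry, sketched here with assumed helper names:

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateEndpointsWithRetry re-reads the object on each attempt so the
// Update carries a fresh resourceVersion, retrying only on conflicts.
func updateEndpointsWithRetry(c kubernetes.Interface, ns, name string, mutate func(*corev1.Endpoints)) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ep, err := c.CoreV1().Endpoints(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		mutate(ep)
		_, err = c.CoreV1().Endpoints(ns).Update(context.TODO(), ep, metav1.UpdateOptions{})
		return err
	})
}
```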
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0h23_kube-system(a7d7c673a5678c3fd05bb8d81e613fd2) Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-prl8_kube-system(af7f7d5ac5e113eedfb5c13ec70c059c) Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-prl8_kube-system(af7f7d5ac5e113eedfb5c13ec70c059c) Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qp90_kube-system(fdc7414ccaf4c7060bb3a896ee9c4fdc) Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 22:15:25.460: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e5aa9ff1-292b-44e6-a72b-8735e76d222a became leader Jan 29 22:15:25.460: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_68b1b904-ad42-431c-80bb-86195fbcd230 became leader Jan 29 22:15:25.460: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_65313fb6-cd85-4780-9c60-766a799fefea became leader Jan 29 22:15:25.460: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4b1c330c-d507-49e9-bb07-682f604268de became leader Jan 29 22:15:25.460: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_add53644-d297-46e8-a997-cbf4dbb45277 became leader Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-br722 to bootstrap-e2e-minion-group-0h23 Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.254994621s (1.255003973s including waiting) Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Liveness probe failed: Get "http://10.64.0.23:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Container default-http-backend failed liveness probe, will be restarted Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-br722 Jan 29 22:15:25.460: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
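The Unhealthy/Killing pair for l7-default-backend shows the liveness cycle in miniature: kubelet's HTTP GET to `http://10.64.0.23:8080/healthz` hit its deadline, and after enough consecutive failures the container is killed and restarted in place. A sketch of that loop's semantics follows, using kubelet's documented defaults (1s timeout, 10s period, failureThreshold 3) as assumptions, since this pod's actual probe spec is not shown in the log:

```go
// Illustrative probe loop. Success is any 2xx/3xx response within the
// deadline; the timeout, period, and threshold are assumed kubelet defaults,
// not values taken from this pod's manifest.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeOnce(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return err // surfaces like "context deadline exceeded (Client.Timeout exceeded ...)"
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	failures := 0
	for {
		if err := probeOnce("http://10.64.0.23:8080/healthz", 1*time.Second); err != nil {
			failures++
			fmt.Println("Liveness probe failed:", err)
			if failures >= 3 {
				fmt.Println("Container failed liveness probe, will be restarted")
				return
			}
		} else {
			failures = 0
		}
		time.Sleep(10 * time.Second)
	}
}
```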
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7h8xr to bootstrap-e2e-minion-group-0h23 Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 728.14263ms (728.154201ms including waiting) Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.813378152s (1.81340007s including waiting) Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-gjgkr to bootstrap-e2e-minion-group-prl8 Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 725.023258ms (725.04726ms including waiting) Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.833322514s (1.833331253s including waiting) Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-n78nd to bootstrap-e2e-minion-group-qp90 Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 789.594528ms (789.609762ms including waiting) Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.896285117s (1.896293813s including waiting) Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-phrn6 to bootstrap-e2e-master Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 638.236648ms (638.252765ms including waiting) Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.561997891s (1.56200326s including waiting) Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-7h8xr Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-phrn6 Jan 29 22:15:25.461: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-gjgkr Jan 29 22:15:25.461: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-n78nd Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
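The two FailedScheduling messages bracket cluster bring-up: first no nodes were registered at all, then the only registered node still carried the `node.kubernetes.io/not-ready:NoSchedule` taint that these pods do not tolerate, and preemption cannot help because evicting pods would not remove a taint. The core rule is that every NoSchedule taint must be matched by some toleration; a simplified standalone version of that match (the real logic is in the API helpers, roughly `Toleration.ToleratesTaint`) looks like:

```go
// Simplified taint/toleration match behind "untolerated taint" failures.
// Only the Equal/Exists operators and the NoSchedule effect are modeled.
package main

import "fmt"

type Taint struct{ Key, Value, Effect string }
type Toleration struct{ Key, Operator, Value string }

func tolerates(tol Toleration, t Taint) bool {
	if tol.Key != "" && tol.Key != t.Key {
		return false
	}
	if tol.Operator == "Exists" {
		return true
	}
	return tol.Value == t.Value // "Equal" (the default operator)
}

func schedulable(taints []Taint, tols []Toleration) bool {
	for _, t := range taints {
		if t.Effect != "NoSchedule" {
			continue
		}
		matched := false
		for _, tol := range tols {
			if tolerates(tol, t) {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}

func main() {
	notReady := []Taint{{Key: "node.kubernetes.io/not-ready", Effect: "NoSchedule"}}
	fmt.Println(schedulable(notReady, nil)) // false: "0/1 nodes are available"
}
```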
Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-858xc to bootstrap-e2e-minion-group-0h23 Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 3.198313689s (3.198321554s including waiting) Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metrics-server Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metrics-server Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 3.812916392s (3.812924842s including waiting) Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metrics-server-nanny Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metrics-server-nanny Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container metrics-server Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container metrics-server-nanny Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c-858xc: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-858xc Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-858xc Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-qmbs6 to bootstrap-e2e-minion-group-qp90 Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.353709849s (1.353731831s including waiting) Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.015217229s (1.01523164s including waiting) Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server-nanny Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server-nanny Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": dial tcp 10.64.3.3:10250: connect: connection refused Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": dial tcp 10.64.3.3:10250: connect: connection refused Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server-nanny Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": read tcp 10.64.3.1:36350->10.64.3.3:10250: read: connection reset by peer Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Container metrics-server failed liveness probe, will be restarted Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Failed: Error: failed to get sandbox container task: no running task found: task 4ac2767f3e99f3d72489c6f4ac8b5d5588d1b55aca1cdd3beefe33bfd1fb8c2e not found: not found Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server-nanny Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metrics-server-nanny Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Readiness probe failed: Get "https://10.64.3.8:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "https://10.64.3.8:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-qmbs6_kube-system(44703c8b-4289-449f-8dce-96f50d686272) Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server-nanny Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container metrics-server Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/metrics-server-v0.5.2-867b8754b9-qmbs6 Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9-qmbs6: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metrics-server Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-qmbs6 Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 22:15:25.461: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-0h23 Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.290617862s (2.290627616s including waiting) Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container volume-snapshot-controller Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container volume-snapshot-controller Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container volume-snapshot-controller Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(1b9daa28-15d1-49b3-a153-e62f36714b55) Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container volume-snapshot-controller Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container volume-snapshot-controller Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container volume-snapshot-controller Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(1b9daa28-15d1-49b3-a153-e62f36714b55) Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
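"Back-off restarting failed container" is kubelet's crash-loop damping, surfaced to users as CrashLoopBackOff: the restart delay starts small, doubles per failure, and is capped, which is why volume-snapshot-controller reached a restart count of 10 over roughly fifteen minutes rather than restarting continuously. The constants below (10s base, 5m cap, reset after about 10 minutes of stable running) are kubelet's documented defaults, assumed rather than taken from this cluster's configuration:

```go
// Sketch of the crash-loop restart delay behind the BackOff events.
// Constants are assumed kubelet defaults, for illustration only.
package main

import (
	"fmt"
	"time"
)

func restartDelay(restarts int) time.Duration {
	d := 10 * time.Second // initial back-off
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute // cap; kubelet resets it after ~10m of healthy running
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %s\n", r, restartDelay(r))
	}
}
```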
Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container volume-snapshot-controller Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container volume-snapshot-controller Jan 29 22:15:25.461: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 22:15:25.461 (78ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 22:15:25.461 Jan 29 22:15:25.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 22:15:25.507 (46ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 22:15:25.507 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 22:15:25.507 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 22:15:25.507 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 22:15:25.507 STEP: Collecting events from namespace "reboot-1364". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 22:15:25.507 STEP: Found 0 events. 
- test/e2e/framework/debug/dump.go:46 @ 01/29/23 22:15:25.55 Jan 29 22:15:25.596: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 22:15:25.596: INFO: Jan 29 22:15:25.642: INFO: Logging node info for node bootstrap-e2e-master Jan 29 22:15:25.695: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master b2fbf9c6-a8ad-4945-a5e2-052805da66e2 1981 0 2023-01-29 22:00:49 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:49 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 22:01:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2023-01-29 22:01:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2023-01-29 22:11:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: 
{{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:11:19 +0000 UTC,LastTransitionTime:2023-01-29 22:00:57 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.82.220.45,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:0fdb3cfe29f66637553465718381a2f8,SystemUUID:0fdb3cfe-29f6-6637-5534-65718381a2f8,BootID:6f3f19cb-1b2d-43f1-a98c-6f2c40560047,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c 
registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:15:25.696: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 22:15:25.758: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 22:15:25.897: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:25.897: INFO: Container etcd-container ready: true, restart count 3 Jan 29 22:15:25.897: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:25.897: INFO: Container kube-controller-manager ready: true, restart count 5 Jan 29 22:15:25.897: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:25.897: INFO: Container kube-scheduler ready: true, restart count 5 Jan 29 22:15:25.897: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 22:00:22 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:25.897: INFO: Container kube-addon-manager ready: true, restart count 2 Jan 29 22:15:25.897: INFO: metadata-proxy-v0.1-phrn6 started at 2023-01-29 22:00:57 +0000 UTC (0+2 container statuses recorded) Jan 29 22:15:25.897: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 22:15:25.897: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 22:15:25.897: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:25.897: INFO: Container etcd-container ready: true, restart count 2 Jan 29 22:15:25.897: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:25.897: INFO: Container konnectivity-server-container ready: true, restart count 2 Jan 29 22:15:25.897: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 22:00:06 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:25.897: INFO: Container kube-apiserver ready: true, restart count 3 Jan 29 22:15:25.897: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 22:00:22 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:25.897: INFO: Container l7-lb-controller ready: false, restart count 6 Jan 29 22:15:26.098: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 22:15:26.098: INFO: Logging node info for node bootstrap-e2e-minion-group-0h23 Jan 29 22:15:26.143: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-0h23 4bc52c5d-d6ac-4b10-a791-0f46bb41bbe0 2283 0 2023-01-29 22:00:45 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-0h23 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:06:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 22:14:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 22:15:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-29 22:15:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} }]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-0h23,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} 
{<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:00:54 +0000 UTC,LastTransitionTime:2023-01-29 22:00:54 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:15:18 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:15:18 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:15:18 +0000 UTC,LastTransitionTime:2023-01-29 22:06:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:15:18 +0000 UTC,LastTransitionTime:2023-01-29 22:15:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:35.247.69.167,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-0h23.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-0h23.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8b143884b0552b595cbcfc83ba2dba58,SystemUUID:8b143884-b055-2b59-5cbc-fc83ba2dba58,BootID:e95083ba-4d7e-457b-8e23-c36adf77eeba,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:15:26.143: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-0h23 Jan 29 22:15:26.198: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-0h23 Jan 29 22:15:26.354: INFO: metadata-proxy-v0.1-7h8xr started at 2023-01-29 22:00:46 +0000 UTC (0+2 container statuses recorded) Jan 29 22:15:26.354: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 
22:15:26.354: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 22:15:26.354: INFO: konnectivity-agent-c8fqq started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:26.354: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 29 22:15:26.354: INFO: coredns-6846b5b5f-q6pbg started at 2023-01-29 22:01:02 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:26.354: INFO: Container coredns ready: false, restart count 7 Jan 29 22:15:26.354: INFO: kube-proxy-bootstrap-e2e-minion-group-0h23 started at 2023-01-29 22:00:45 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:26.354: INFO: Container kube-proxy ready: true, restart count 5 Jan 29 22:15:26.354: INFO: l7-default-backend-8549d69d99-br722 started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:26.354: INFO: Container default-http-backend ready: false, restart count 3 Jan 29 22:15:26.354: INFO: coredns-6846b5b5f-67jtp started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:26.354: INFO: Container coredns ready: false, restart count 5 Jan 29 22:15:26.354: INFO: kube-dns-autoscaler-5f6455f985-8w5rj started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:26.354: INFO: Container autoscaler ready: true, restart count 7 Jan 29 22:15:26.354: INFO: volume-snapshot-controller-0 started at 2023-01-29 22:00:54 +0000 UTC (0+1 container statuses recorded) Jan 29 22:15:26.354: INFO: Container volume-snapshot-controller ready: true, restart count 10 Jan 29 22:16:01.650: INFO: Latency metrics for node bootstrap-e2e-minion-group-0h23 Jan 29 22:16:01.650: INFO: Logging node info for node bootstrap-e2e-minion-group-prl8 Jan 29 22:16:01.692: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-prl8 dc1f933b-530d-4900-80bb-fdebf917515a 2340 0 2023-01-29 22:00:50 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-prl8 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:12:47 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 22:14:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 22:15:18 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-29 22:15:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} }]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-prl8,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815430144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553286144 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 
+0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:15:18 +0000 UTC,LastTransitionTime:2023-01-29 22:15:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:15:18 +0000 UTC,LastTransitionTime:2023-01-29 22:15:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:15:18 +0000 UTC,LastTransitionTime:2023-01-29 22:15:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:15:18 +0000 UTC,LastTransitionTime:2023-01-29 22:15:18 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.197.11.253,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-prl8.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-prl8.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e4ee97ed1426b2932671b760c0a7fcdd,SystemUUID:e4ee97ed-1426-b293-2671-b760c0a7fcdd,BootID:3f7bd2bf-f0a0-4015-b387-094d6c12402a,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:16:01.692: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-prl8 Jan 29 22:16:01.738: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-prl8 Jan 29 22:16:01.806: INFO: konnectivity-agent-68c9g started at 2023-01-29 22:01:07 +0000 UTC (0+1 container statuses recorded) Jan 29 22:16:01.806: INFO: Container konnectivity-agent ready: true, restart count 7 Jan 29 22:16:01.806: INFO: kube-proxy-bootstrap-e2e-minion-group-prl8 started at 2023-01-29 22:00:50 +0000 UTC (0+1 container statuses recorded) Jan 29 22:16:01.806: INFO: Container kube-proxy ready: false, restart count 5 Jan 29 22:16:01.806: INFO: metadata-proxy-v0.1-gjgkr started at 2023-01-29 22:00:51 +0000 UTC (0+2 container statuses recorded) Jan 29 22:16:01.806: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 22:16:01.806: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 22:16:01.964: INFO: Latency metrics for node bootstrap-e2e-minion-group-prl8 Jan 29 22:16:01.964: INFO: Logging node info for node bootstrap-e2e-minion-group-qp90 Jan 29 22:16:02.006: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-qp90 6a45fc18-dedd-4084-96e2-e6ff57e70a04 2289 0 2023-01-29 22:00:51 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-qp90 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 22:00:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 22:12:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 22:14:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 22:15:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 22:15:19 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:memory":{}},"f:capacity":{"f:memory":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-ci-reboot/us-west1-b/bootstrap-e2e-minion-group-qp90,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: 
{{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 22:14:33 +0000 UTC,LastTransitionTime:2023-01-29 22:14:32 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 22:01:07 +0000 UTC,LastTransitionTime:2023-01-29 22:01:07 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 22:15:19 +0000 UTC,LastTransitionTime:2023-01-29 22:15:18 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 22:15:19 +0000 UTC,LastTransitionTime:2023-01-29 22:15:18 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 22:15:19 +0000 UTC,LastTransitionTime:2023-01-29 22:15:18 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 22:15:19 +0000 UTC,LastTransitionTime:2023-01-29 22:15:19 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.82.19.122,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-qp90.c.k8s-jkns-e2e-gce-ci-reboot.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-qp90.c.k8s-jkns-e2e-gce-ci-reboot.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f468cde0139c49621ce815c9f02c0393,SystemUUID:f468cde0-139c-4962-1ce8-15c9f02c0393,BootID:6199402a-4daa-4161-8dd8-6a4ef9acf8a5,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-9-g967979efd,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 22:16:02.006: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-qp90 Jan 29 22:16:02.052: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-qp90 Jan 29 22:16:02.156: INFO: kube-proxy-bootstrap-e2e-minion-group-qp90 started at 2023-01-29 22:00:52 +0000 UTC (0+1 container statuses recorded) Jan 29 22:16:02.156: INFO: Container kube-proxy ready: true, restart count 4 Jan 29 22:16:02.156: INFO: metadata-proxy-v0.1-n78nd started at 2023-01-29 22:00:52 +0000 UTC (0+2 container statuses recorded) Jan 29 22:16:02.156: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 22:16:02.156: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 22:16:02.156: INFO: konnectivity-agent-srg78 started at 2023-01-29 22:01:07 +0000 UTC (0+1 container statuses recorded) Jan 29 22:16:02.156: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 29 22:16:02.156: INFO: metrics-server-v0.5.2-867b8754b9-qmbs6 started at 2023-01-29 22:01:18 +0000 UTC (0+2 container statuses recorded) Jan 29 22:16:02.156: INFO: Container metrics-server ready: false, restart count 8 Jan 29 22:16:02.156: INFO: Container metrics-server-nanny ready: true, restart count 8 Jan 29 22:16:02.318: INFO: Latency metrics for node 
bootstrap-e2e-minion-group-qp90 END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 22:16:02.318 (36.811s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 22:16:02.318 (36.811s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 22:16:02.318 STEP: Destroying namespace "reboot-1364" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 22:16:02.318 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 22:16:02.361 (43ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 22:16:02.361 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 22:16:02.361 (0s)
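Note that the failure entry above never prints the disruption script it ran; only the recovery polling and the post-failure node dumps are visible. For orientation, here is a hypothetical sketch (not the suite's actual script, which lives in test/e2e/cloud/gcp/reboot.go) of the kind of inbound-packet blackout the "dropping all inbound packets" case issues over SSH, assuming iptables and passwordless sudo on the node:

    nohup sh -c '
      sudo iptables -I INPUT 1 -j DROP &&   # hypothetical: blackhole all inbound traffic
      sleep 120 &&                          # hold it long enough for Ready to flip to NotReady
      sudo iptables -D INPUT 1              # lift the rule so the node can recover
    ' >/dev/null 2>&1 &

The command has to be backgrounded with nohup because the first rule severs the SSH session that issued it; the test then waits for each node to go NotReady and return to Ready, and that wait is what timed out here.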
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\striggering\skernel\spanic\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 22:15:25.383
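As the SSH lines further down in this log show, the disruption in this variant is a sysrq-triggered kernel crash issued over SSH, essentially:

    nohup sh -c '
      echo 1 | sudo tee /proc/sys/kernel/sysrq &&   # enable all sysrq functions
      sleep 10 &&                                   # give the SSH command time to return
      echo c | sudo tee /proc/sysrq-trigger         # "c" forces an immediate kernel panic
    ' >/dev/null 2>&1 &

After the panic, the test waits up to 2m0s for each node's Ready condition to go false and then up to 5m0s for it to come back true; the [FAILED] above means at least one node did not complete that round trip in time.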
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 22:11:49.544 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 22:11:49.544 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 22:11:49.544 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 22:11:49.544 Jan 29 22:11:49.544: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 22:11:49.545 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 22:11:53.023 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 22:11:53.103 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 22:11:53.184 (3.64s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 22:11:53.184 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 22:11:53.184 (0s) > Enter [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/29/23 22:11:53.184 Jan 29 22:11:53.277: INFO: Getting bootstrap-e2e-minion-group-prl8 Jan 29 22:11:53.278: INFO: Getting bootstrap-e2e-minion-group-qp90 Jan 29 22:11:53.278: INFO: Getting bootstrap-e2e-minion-group-0h23 Jan 29 22:11:53.338: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-qp90 condition Ready to be true Jan 29 22:11:53.338: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-prl8 condition Ready to be true Jan 29 22:11:53.338: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-0h23 condition Ready to be true Jan 29 22:11:53.383: INFO: Node bootstrap-e2e-minion-group-qp90 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-n78nd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.383: INFO: Node bootstrap-e2e-minion-group-prl8 has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-gjgkr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.383: INFO: Node bootstrap-e2e-minion-group-0h23 has 4 assigned pods with no liveness probes: [metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8w5rj 
kube-proxy-bootstrap-e2e-minion-group-0h23] Jan 29 22:11:53.383: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-0h23" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.384: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qp90" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.384: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-prl8" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.384: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-7h8xr" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.384: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.384: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-8w5rj" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:11:53.427: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=true. Elapsed: 43.665742ms Jan 29 22:11:53.427: INFO: Pod "metadata-proxy-v0.1-n78nd" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.428: INFO: Pod "kube-dns-autoscaler-5f6455f985-8w5rj": Phase="Running", Reason="", readiness=true. Elapsed: 44.618835ms Jan 29 22:11:53.429: INFO: Pod "kube-dns-autoscaler-5f6455f985-8w5rj" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.430: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.262057ms Jan 29 22:11:53.430: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:11:53.431: INFO: Pod "metadata-proxy-v0.1-7h8xr": Phase="Running", Reason="", readiness=true. Elapsed: 47.523655ms Jan 29 22:11:53.431: INFO: Pod "metadata-proxy-v0.1-7h8xr" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.431: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8": Phase="Running", Reason="", readiness=true. Elapsed: 47.682032ms Jan 29 22:11:53.431: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.431: INFO: Pod "metadata-proxy-v0.1-gjgkr": Phase="Running", Reason="", readiness=true. Elapsed: 48.166646ms Jan 29 22:11:53.431: INFO: Pod "metadata-proxy-v0.1-gjgkr" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.431: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr] Jan 29 22:11:53.431: INFO: Getting external IP address for bootstrap-e2e-minion-group-prl8 Jan 29 22:11:53.431: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-prl8(35.197.11.253:22) Jan 29 22:11:53.432: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=true. Elapsed: 48.262495ms Jan 29 22:11:53.432: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.432: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:11:53.432: INFO: Getting external IP address for bootstrap-e2e-minion-group-qp90 Jan 29 22:11:53.432: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-qp90(34.82.19.122:22) Jan 29 22:11:53.432: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0h23": Phase="Running", Reason="", readiness=true. Elapsed: 48.598549ms Jan 29 22:11:53.432: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-0h23" satisfied condition "running and ready, or succeeded" Jan 29 22:11:53.950: INFO: ssh prow@35.197.11.253:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 22:11:53.950: INFO: ssh prow@35.197.11.253:22: stdout: "" Jan 29 22:11:53.950: INFO: ssh prow@35.197.11.253:22: stderr: "" Jan 29 22:11:53.950: INFO: ssh prow@35.197.11.253:22: exit code: 0 Jan 29 22:11:53.950: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-prl8 condition Ready to be false Jan 29 22:11:53.951: INFO: ssh prow@34.82.19.122:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 22:11:53.951: INFO: ssh prow@34.82.19.122:22: stdout: "" Jan 29 22:11:53.951: INFO: ssh prow@34.82.19.122:22: stderr: "" Jan 29 22:11:53.951: INFO: ssh prow@34.82.19.122:22: exit code: 0 Jan 29 22:11:53.951: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-qp90 condition Ready to be false Jan 29 22:11:53.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:53.995: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:55.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.088315574s Jan 29 22:11:55.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:11:56.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:56.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:57.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.088916315s Jan 29 22:11:57.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:11:58.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:58.087: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:11:59.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.087889706s Jan 29 22:11:59.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:00.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:00.134: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:12:01.490: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.106442473s Jan 29 22:12:01.490: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:02.178: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:02.178: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:03.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.088240889s Jan 29 22:12:03.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:04.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:04.230: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:05.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.088768761s Jan 29 22:12:05.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:06.275: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:06.275: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:07.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.088905636s Jan 29 22:12:07.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:08.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:08.319: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:09.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.08814026s Jan 29 22:12:09.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:10.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:10.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:11.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.08804939s Jan 29 22:12:11.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:12.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:12:12.405: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:13.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.088212979s Jan 29 22:12:13.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:14.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:14.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:15.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.088986525s Jan 29 22:12:15.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:16.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:16.492: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:17.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.088843713s Jan 29 22:12:17.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:18.536: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:18.536: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:19.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 26.088098001s Jan 29 22:12:19.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:20.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:20.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:21.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 28.087898127s Jan 29 22:12:21.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:22.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:22.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:23.471: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.087672404s Jan 29 22:12:23.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:24.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:24.667: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:25.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 32.08872282s Jan 29 22:12:25.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:26.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:26.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:27.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 34.088634212s Jan 29 22:12:27.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:28.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:28.754: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:12:29.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 36.088684469s Jan 29 22:12:29.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:30.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:30.798: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:31.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 38.088280515s Jan 29 22:12:31.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:32.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:32.842: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:33.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 40.088295429s Jan 29 22:12:33.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:34.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:34.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:35.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 42.088587838s Jan 29 22:12:35.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:36.932: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:36.932: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:37.473: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 44.088744094s Jan 29 22:12:37.473: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:38.975: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-qp90 condition Ready to be true Jan 29 22:12:38.976: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:39.032: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:12:39.471: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 46.087430428s Jan 29 22:12:39.471: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:41.018: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:41.074: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:12:41.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 48.087855554s Jan 29 22:12:41.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:43.061: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:43.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:43.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 50.088084915s Jan 29 22:12:43.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:45.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:45.160: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:45.474: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.089713066s Jan 29 22:12:45.474: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:47.147: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:12:47.202: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:47.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 54.088440996s Jan 29 22:12:47.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:49.189: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-prl8 condition Ready to be true Jan 29 22:12:49.232: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:12:49.245: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:49.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 56.088153257s Jan 29 22:12:49.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:51.274: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 22:12:51.288: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:51.476: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.091996321s Jan 29 22:12:51.476: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:53.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:12:53.330: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:53.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m0.087828383s Jan 29 22:12:53.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:55.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:12:55.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:55.471: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.087433586s Jan 29 22:12:55.471: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:57.402: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:12:57.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:57.472: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.087906454s Jan 29 22:12:57.472: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:12:59.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:12:59.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:12:59.471: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m6.087442901s Jan 29 22:12:59.471: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-0h23' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:11:40 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:54 +0000 UTC }] Jan 29 22:13:01.471: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m8.08763095s Jan 29 22:13:01.471: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 22:13:01.472: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [metadata-proxy-v0.1-7h8xr volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-8w5rj kube-proxy-bootstrap-e2e-minion-group-0h23] Jan 29 22:13:01.472: INFO: Getting external IP address for bootstrap-e2e-minion-group-0h23 Jan 29 22:13:01.472: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-0h23(35.247.69.167:22) Jan 29 22:13:01.487: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:13:01.502: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:13:01.994: INFO: ssh prow@35.247.69.167:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 22:13:01.994: INFO: ssh prow@35.247.69.167:22: stdout: "" Jan 29 22:13:01.994: INFO: ssh prow@35.247.69.167:22: stderr: "" Jan 29 22:13:01.994: INFO: ssh prow@35.247.69.167:22: exit code: 0 Jan 29 22:13:01.994: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-0h23 condition Ready to be false Jan 29 22:13:02.036: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:13:03.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:13:03.544: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:13:04.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 22:13:05.572: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:13:05.590: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:13:06.123: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:13:07.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:13:07.633: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:13:08.165: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:13:09.654: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:09.673: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:10.205: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:11.694: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:11.713: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:12.246: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:13.735: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:13.753: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:14.286: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:15.775: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:15.793: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:16.326: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:17.816: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:17.833: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:18.367: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:19.857: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:19.874: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:20.407: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:21.897: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:21.914: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:22.447: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:23.936: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:23.954: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:24.487: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:25.977: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:25.993: INFO: Couldn't get node 
bootstrap-e2e-minion-group-qp90 Jan 29 22:13:26.527: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:28.017: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:28.034: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:28.567: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:30.058: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:30.074: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:30.607: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:32.098: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:32.114: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:32.647: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:34.138: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:34.154: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:34.687: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:36.178: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:36.193: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:36.728: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:38.219: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:38.233: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:38.768: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:40.259: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:40.273: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:40.808: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:42.299: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:42.314: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:42.848: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:44.339: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:44.354: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:44.888: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:46.379: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:46.394: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:46.929: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:48.420: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:48.434: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:48.969: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:50.460: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:50.474: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:51.009: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:52.500: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:52.513: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:53.049: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:54.540: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:54.553: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:55.090: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:13:56.580: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:56.593: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:57.130: INFO: Couldn't get node 
bootstrap-e2e-minion-group-0h23 Jan 29 22:13:58.620: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:13:58.633: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:13:59.170: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:00.660: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:00.673: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:01.211: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:02.699: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:02.713: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:03.251: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:04.739: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:04.753: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:05.291: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:06.780: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:06.793: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:07.331: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:08.821: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:08.834: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:09.371: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:10.861: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:10.874: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:11.411: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:12.900: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:12.913: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:13.451: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:14.940: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:14.953: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:15.491: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:16.980: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:16.993: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:17.531: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:19.020: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:19.033: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:19.571: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:21.060: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:21.073: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:21.611: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:23.101: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:23.113: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:23.651: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:25.140: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:25.153: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:25.691: INFO: Couldn't get node bootstrap-e2e-minion-group-0h23 Jan 29 22:14:27.181: INFO: Couldn't get node bootstrap-e2e-minion-group-prl8 Jan 29 22:14:27.192: INFO: Couldn't get node bootstrap-e2e-minion-group-qp90 Jan 29 22:14:32.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController 
with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:32.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:32.057: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:34.104: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:34.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:34.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:36.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:36.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:36.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:38.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:38.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:38.197: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:40.246: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:40.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. 
Failure Jan 29 22:14:40.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:42.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:42.292: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:42.293: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:44.339: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:44.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:44.340: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:46.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:46.387: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:46.388: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:48.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:48.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:48.437: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:50.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:50.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:50.485: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:52.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:52.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:52.534: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:54.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:54.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:54.581: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:56.624: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:14:56.624: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:56.626: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:14:58.696: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:14:58.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. 
Failure Jan 29 22:14:58.697: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:00.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:00.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-0h23 is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 22:15:00.743: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:02.743: INFO: Node bootstrap-e2e-minion-group-0h23 didn't reach desired Ready condition status (false) within 2m0s Jan 29 22:15:02.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:02.789: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:04.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:04.834: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:06.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:06.880: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:08.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:08.925: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. 
Failure Jan 29 22:15:10.971: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:10.971: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:13.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:13.017: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:15.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:15.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:17.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:47 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:52 +0000 UTC}]. Failure Jan 29 22:15:17.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-qp90 is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 22:12:37 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 22:12:42 +0000 UTC}]. Failure Jan 29 22:15:19.159: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 22:15:18 +0000 UTC} {node.kubernetes.io/not-ready NoSchedule 2023-01-29 22:15:18 +0000 UTC}]. Failure Jan 29 22:15:19.159: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd] Jan 29 22:15:19.159: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-n78nd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:15:19.159: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-qp90" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 22:15:19.202: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=false. Elapsed: 43.140158ms Jan 29 22:15:19.202: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=false. 
Elapsed: 43.117212ms Jan 29 22:15:19.202: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-n78nd' on 'bootstrap-e2e-minion-group-qp90' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:12:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:07:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC }] Jan 29 22:15:19.202: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-qp90' on 'bootstrap-e2e-minion-group-qp90' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:12:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:07:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC }] Jan 29 22:15:21.201: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 22:15:18 +0000 UTC}]. Failure Jan 29 22:15:21.245: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=false. Elapsed: 2.086449834s Jan 29 22:15:21.245: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=false. Elapsed: 2.086515872s Jan 29 22:15:21.245: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-n78nd' on 'bootstrap-e2e-minion-group-qp90' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:12:37 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:15:19 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC }] Jan 29 22:15:21.245: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-qp90' on 'bootstrap-e2e-minion-group-qp90' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:12:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:07:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 22:00:52 +0000 UTC }] Jan 29 22:15:23.247: INFO: Pod "metadata-proxy-v0.1-n78nd": Phase="Running", Reason="", readiness=true. Elapsed: 4.088451323s Jan 29 22:15:23.247: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90": Phase="Running", Reason="", readiness=true. Elapsed: 4.088415254s Jan 29 22:15:23.247: INFO: Pod "metadata-proxy-v0.1-n78nd" satisfied condition "running and ready, or succeeded" Jan 29 22:15:23.247: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-qp90" satisfied condition "running and ready, or succeeded" Jan 29 22:15:23.247: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-qp90 metadata-proxy-v0.1-n78nd]
Jan 29 22:15:23.247: INFO: Reboot successful on node bootstrap-e2e-minion-group-qp90
Jan 29 22:15:23.247: INFO: Condition Ready of node bootstrap-e2e-minion-group-prl8 is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 22:15:18 +0000 UTC}]. Failure
Jan 29 22:15:25.296: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr]
Jan 29 22:15:25.296: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-gjgkr" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:15:25.296: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-prl8" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 22:15:25.381: INFO: Pod "metadata-proxy-v0.1-gjgkr": Phase="Running", Reason="", readiness=true. Elapsed: 84.823677ms
Jan 29 22:15:25.381: INFO: Pod "metadata-proxy-v0.1-gjgkr" satisfied condition "running and ready, or succeeded"
Jan 29 22:15:25.382: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8": Phase="Running", Reason="", readiness=true. Elapsed: 86.418418ms
Jan 29 22:15:25.382: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-prl8" satisfied condition "running and ready, or succeeded"
Jan 29 22:15:25.382: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-prl8 metadata-proxy-v0.1-gjgkr]
Jan 29 22:15:25.382: INFO: Reboot successful on node bootstrap-e2e-minion-group-prl8
Jan 29 22:15:25.382: INFO: Node bootstrap-e2e-minion-group-0h23 failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 22:15:25.383
< Exit [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/29/23 22:15:25.383 (3m32.199s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 22:15:25.383
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 22:15:25.383
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
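Before the kube-system event dump continues, it is worth restating the mechanism the log above has been exercising: the suite enables sysrq over SSH, writes "c" to /proc/sysrq-trigger to force a kernel panic, then polls each node's Ready condition, expecting it to flip to false (node down) and back to true (node rebooted). Below is a minimal sketch of the polling half, assuming a client-go clientset; it is not the suite's implementation (the real helpers live in test/e2e/cloud/gcp/reboot.go and the e2e framework), only the same shape.

package reboot

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// rebootCmd is the exact command the log shows being sent over SSH: enable
// sysrq, sleep 10s so the SSH session can detach, then trigger a kernel panic.
const rebootCmd = "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &"

// waitForNodeReady polls a node until its Ready condition matches want or the
// timeout expires -- the same shape as the "Waiting up to 2m0s for node ...
// condition Ready to be false" loop in the log.
func waitForNodeReady(ctx context.Context, c kubernetes.Interface, name string, want corev1.ConditionStatus, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == want {
					return nil
				}
			}
		} // on error ("Couldn't get node ..."), keep polling through the API outage
		time.Sleep(2 * time.Second) // the log polls roughly every 2s
	}
	return fmt.Errorf("node %s did not reach Ready=%s within %v", name, want, timeout)
}

In this run, bootstrap-e2e-minion-group-0h23 accepted the command (SSH exit code 0 at 22:13:01) but its Ready condition never left true, so the 2m0s wait for Ready=false expired at 22:15:02 and the node was marked as having failed the reboot test.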
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-67jtp to bootstrap-e2e-minion-group-0h23
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 2.461832289s (2.461840828s including waiting)
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.2:8181/ready": dial tcp 10.64.0.2:8181: connect: connection refused
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
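The Unhealthy events above are the kubelet's HTTP readiness probe against coredns's /ready endpoint on port 8181. "connection refused" means nothing was listening on the pod IP (the container was down or restarting); "context deadline exceeded" and the 503 seen further down mean the process was up but not answering in time or not yet healthy. A standalone check equivalent to what the probe does, assuming the pod IP from the events and the kubelet's default 1s probe timeout (both assumptions, not taken from the suite):

package probe

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// checkReady issues the same HTTP GET the kubelet readiness probe performs
// against coredns. The two error returns below are exactly the failure modes
// in the events: a refused dial, or a timeout while awaiting headers.
func checkReady(podIP string) error {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second) // default probe timeout (assumed)
	defer cancel()
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, fmt.Sprintf("http://%s:8181/ready", podIP), nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err // "connection refused" or "context deadline exceeded"
	}
	defer resp.Body.Close()
	// The kubelet treats 2xx/3xx as success; anything else fails the probe,
	// e.g. the "HTTP probe failed with statuscode: 503" event below.
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}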
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": dial tcp 10.64.0.22:8181: connect: connection refused
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.22:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-67jtp_kube-system(72ca1a62-bb47-4fdd-8565-8cdea1e5a00a)
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.28:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-67jtp: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-q6pbg to bootstrap-e2e-minion-group-0h23
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
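The BackOff events above ("Back-off restarting failed container") are the kubelet's crash-loop back-off. As a rule of thumb (kubelet defaults, assumed here rather than taken from this job's configuration), the restart delay starts at 10s, doubles per failure, and is capped at 5m, resetting once a container runs cleanly for a while. A sketch of that schedule under those assumed parameters:

package backoff

import "time"

// crashLoopDelays returns kubelet-style restart delays: start at 10s, double
// on each failure, cap at 5m. These are the assumed default parameters; they
// are illustrative, not read from the cluster under test.
func crashLoopDelays(failures int) []time.Duration {
	const (
		initial = 10 * time.Second
		max     = 5 * time.Minute
	)
	delays := make([]time.Duration, 0, failures)
	d := initial
	for i := 0; i < failures; i++ {
		delays = append(delays, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return delays
}

crashLoopDelays(7) yields 10s, 20s, 40s, 1m20s, 2m40s, 5m, 5m, which is why a container that keeps dying with its node settles into one restart attempt every five minutes and its pod can stay unready for long stretches of a run like this one.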
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": dial tcp 10.64.0.26:8181: connect: connection refused Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.26:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container coredns Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-q6pbg_kube-system(ec9db715-1c3c-452f-a7b0-808a6256b618) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Readiness probe failed: Get "http://10.64.0.29:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
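The coredns readiness failures above show two distinct modes: "connect: connection refused" (nothing is listening on the pod IP yet, typically right after the container was killed or the sandbox re-created) and "context deadline exceeded" (a listener exists but did not answer before the probe timeout). A minimal Go sketch of an equivalent probe-style check; the /ready path and port 8181 are taken from the events, while the one-second timeout and the helper name are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// checkReady performs a single HTTP readiness check in the style the
// kubelet uses against coredns' /ready endpoint in the events above.
func checkReady(ctx context.Context, url string) error {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// Both "connect: connection refused" and "context deadline
		// exceeded" surface here as transport errors.
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		// Mirrors "Readiness probe failed: HTTP probe failed with statuscode: 503".
		return fmt.Errorf("not ready: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// The timeout value is an assumption, not the cluster's configured one.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(checkReady(ctx, "http://10.64.0.26:8181/ready"))
}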
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container coredns
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f-q6pbg: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container coredns
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-67jtp
Jan 29 22:15:25.460: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-q6pbg
Jan 29 22:15:25.460: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 22:15:25.460: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 22:15:25.460: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300)
Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 22:15:25.460: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb)
Jan 29 22:15:25.460: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_fd4b became leader
Jan 29 22:15:25.460: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_55acf became leader
Jan 29 22:15:25.460: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ad28a became leader
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-68c9g to bootstrap-e2e-minion-group-prl8
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 657.613501ms (657.634978ms including waiting)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.5:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Unhealthy: Liveness probe failed: Get "http://10.64.2.6:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-68c9g: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-68c9g_kube-system(3cb331ad-8640-4b25-8fca-df355093703f)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-c8fqq to bootstrap-e2e-minion-group-0h23
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 956.296756ms (956.305606ms including waiting)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-c8fqq_kube-system(0836b571-aa7d-46e2-846d-c2ef4dcbfd76)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Liveness probe failed: Get "http://10.64.0.25:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Failed: Error: failed to get sandbox container task: no running task found: task b2c0d64625e18667eee1d0a95e38a58d19d52df858184ed33ed54f65ddc2f556 not found: not found
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-c8fqq_kube-system(0836b571-aa7d-46e2-846d-c2ef4dcbfd76)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-c8fqq: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-srg78 to bootstrap-e2e-minion-group-qp90
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 679.018448ms (679.041957ms including waiting)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-srg78_kube-system(e0557a1e-0314-4bfe-8bff-7b1532b1bc85)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Unhealthy: Liveness probe failed: Get "http://10.64.3.10:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent-srg78: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container konnectivity-agent
Jan 29 22:15:25.460: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-c8fqq
Jan 29 22:15:25.460: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-srg78
Jan 29 22:15:25.460: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-68c9g
Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 22:15:25.460: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 22:15:25.460: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 22:15:25.460: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 22:15:25.460: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 22:15:25.460: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
Jan 29 22:15:25.460: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_17b47e1a-c3ff-42ad-b566-12beffed0288 became leader
Jan 29 22:15:25.460: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a96406e5-1a2d-415b-8674-47808fdfe3fe became leader
Jan 29 22:15:25.460: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_12be7f8d-96f2-4959-9cf6-ed72d48a5404 became leader
Jan 29 22:15:25.460: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_bdf47f3d-8a3a-42dc-96dc-92193f43c416 became leader
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-8w5rj to bootstrap-e2e-minion-group-0h23
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.575856713s (2.575872946s including waiting)
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container autoscaler
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8w5rj_kube-system(7b9fb270-f42e-4c3d-9947-2b7804b28b97)
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container autoscaler
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-8w5rj_kube-system(7b9fb270-f42e-4c3d-9947-2b7804b28b97)
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
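The repeated "Back-off restarting failed container" events for the autoscaler (and for most pods in this dump) come from the kubelet's crash-loop back-off: the restart delay starts at 10s, doubles on each subsequent failure, and is capped at 5m, which is why a pod that keeps crashing during the packet-drop window can stay down for minutes after connectivity returns. A sketch of that schedule; the constants are the documented kubelet defaults and the loop itself is purely illustrative:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // kubelet's initial crash-loop delay
	const maxDelay = 5 * time.Minute // cap after which the delay stops growing
	for restart := 1; restart <= 8; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}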
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container autoscaler
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985-8w5rj: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container autoscaler
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-8w5rj
Jan 29 22:15:25.460: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 22:15:25.460: INFO: event for kube-dns: {endpoint-controller } FailedToUpdateEndpoint: Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints "kube-dns": the object has been modified; please apply your changes to the latest version and try again
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0h23_kube-system(a7d7c673a5678c3fd05bb8d81e613fd2)
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Stopping container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-0h23_kube-system(a7d7c673a5678c3fd05bb8d81e613fd2)
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-0h23: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-prl8_kube-system(af7f7d5ac5e113eedfb5c13ec70c059c)
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-prl8_kube-system(af7f7d5ac5e113eedfb5c13ec70c059c)
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-prl8: {kubelet bootstrap-e2e-minion-group-prl8} Killing: Stopping container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Killing: Stopping container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-qp90_kube-system(fdc7414ccaf4c7060bb3a896ee9c4fdc)
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-proxy-bootstrap-e2e-minion-group-qp90: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container kube-proxy
Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
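The recurring DNSConfigForming warnings mean the node's resolv.conf lists more nameservers than the resolver limit of three, so the kubelet keeps only the first three (here 1.1.1.1, 8.8.8.8 and 1.0.0.1) and emits a warning event for the rest. A small sketch of that truncation; the helper name is hypothetical and the fourth server below is a made-up example, not from this cluster:

package main

import "fmt"

// maxNameservers is the classic glibc resolver limit (MAXNS) that the
// kubelet applies when forming a pod's DNS config.
const maxNameservers = 3

func applyNameserverLimit(ns []string) []string {
	if len(ns) <= maxNameservers {
		return ns
	}
	// Everything past the limit is omitted, which triggers the
	// "Nameserver limits were exceeded" event seen above.
	return ns[:maxNameservers]
}

func main() {
	// First three entries match the "applied nameserver line" in the events;
	// 192.0.2.53 is a hypothetical extra server that gets dropped.
	ns := []string{"1.1.1.1", "8.8.8.8", "1.0.0.1", "192.0.2.53"}
	fmt.Println("applied nameserver line:", applyNameserverLimit(ns))
}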
Jan 29 22:15:25.460: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 22:15:25.460: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e5aa9ff1-292b-44e6-a72b-8735e76d222a became leader
Jan 29 22:15:25.460: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_68b1b904-ad42-431c-80bb-86195fbcd230 became leader
Jan 29 22:15:25.460: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_65313fb6-cd85-4780-9c60-766a799fefea became leader
Jan 29 22:15:25.460: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_4b1c330c-d507-49e9-bb07-682f604268de became leader
Jan 29 22:15:25.460: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_add53644-d297-46e8-a997-cbf4dbb45277 became leader
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-br722 to bootstrap-e2e-minion-group-0h23
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 1.254994621s (1.255003973s including waiting)
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container default-http-backend
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Unhealthy: Liveness probe failed: Get "http://10.64.0.23:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99-br722: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container default-http-backend
Jan 29 22:15:25.460: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-br722
Jan 29 22:15:25.460: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 22:15:25.460: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
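Every "event for <pod>: {component host} Reason: message" line in this dump is a rendered core/v1 Event from the kube-system namespace; the e2e framework has its own helper that prints them after a failure. A hedged client-go sketch that would produce the same shape (the $KUBECONFIG lookup and the program structure are assumptions, not the framework's actual code):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the caller's kubeconfig (path taken from $KUBECONFIG here).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	evs, err := cs.CoreV1().Events("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ev := range evs.Items {
		// Same shape as the surrounding log: "event for <name>: {source} Reason: message".
		fmt.Printf("event for %s: {%s %s} %s: %s\n",
			ev.InvolvedObject.Name, ev.Source.Component, ev.Source.Host, ev.Reason, ev.Message)
	}
}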
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-7h8xr to bootstrap-e2e-minion-group-0h23 Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 728.14263ms (728.154201ms including waiting) Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.813378152s (1.81340007s including waiting) Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {node-controller } NodeNotReady: Node is not ready Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1 Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Created: Created container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-7h8xr: {kubelet bootstrap-e2e-minion-group-0h23} Started: Started container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-gjgkr to bootstrap-e2e-minion-group-prl8
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 725.023258ms (725.04726ms including waiting)
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.833322514s (1.833331253s including waiting)
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Created: Created container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-gjgkr: {kubelet bootstrap-e2e-minion-group-prl8} Started: Started container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-n78nd to bootstrap-e2e-minion-group-qp90
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 789.594528ms (789.609762ms including waiting)
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.896285117s (1.896293813s including waiting)
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} DNSConfigForming: Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 8.8.8.8 1.0.0.1
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {node-controller } NodeNotReady: Node is not ready
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Created: Created container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-n78nd: {kubelet bootstrap-e2e-minion-group-qp90} Started: Started container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-phrn6 to bootstrap-e2e-master
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 638.236648ms (638.252765ms including waiting)
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.561997891s (1.56200326s including waiting)
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 22:15:25.460: INFO: event for metadata-proxy-v0.1-phrn6: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 22:15:25.460: IN
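Triage note on the event stream above: each NodeNotReady from the node-controller is followed by a SandboxChanged event and a fresh Pulled/Created/Started cycle in which the images are "already present on machine" — the expected kubelet signature of a node coming back from a reboot, repeated here once per reboot cycle of the test. On a live cluster, the same events can be isolated with a field selector on the involved pod or the event reason (illustrative commands; the pod name is from this particular run):

kubectl get events -n kube-system --field-selector involvedObject.name=metadata-proxy-v0.1-n78nd
kubectl get events -n kube-system --field-selector reason=DNSConfigForming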
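The recurring DNSConfigForming warning means the node's resolv.conf listed more nameservers than the resolver limit of three, so kubelet truncated the list to the applied line shown (1.1.1.1 8.8.8.8 1.0.0.1); which extra entry was dropped is not recorded in this log. A minimal sketch of that check in Go, assuming the glibc MAXNS limit of 3 (illustrative only, not kubelet's actual implementation):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the glibc resolver limit (MAXNS = 3) that kubelet
// enforces when forming a pod's DNS configuration.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" entry from resolv.conf.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	if len(servers) > maxNameservers {
		// Same condition that produced the DNSConfigForming events above:
		// only the first three entries are applied, the rest are omitted.
		fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}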