go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
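The `--ginkgo.focus` pattern above is just the full test name with every non-alphanumeric character backslash-escaped, spaces written as `\s`, and a trailing `$` anchor. A minimal sketch that reproduces it (the escaping rule is inferred from the pattern itself, not taken from the harness source):

```shell
# Rebuild the focus regex from the plain test name.
# Assumed escaping rule: space -> \s, every other non-alphanumeric gets a
# backslash, then the whole thing is anchored with $.
name='Kubernetes e2e suite [It] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards'
focus="$(printf '%s\n' "$name" | sed -e 's/[^A-Za-z0-9 ]/\\&/g' -e 's/ /\\s/g')"'$'
echo "$focus"
```

Running this prints the exact pattern passed to `--ginkgo.focus` in the command above.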
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 05:03:21.901
There were additional failures detected after the initial failure. These are visible in the timeline (from ginkgo_report.xml).
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:00:49.727
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:00:49.727 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:00:49.727
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 05:00:49.728
Jan 29 05:00:49.728: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 05:00:49.729
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 05:00:49.864
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 05:00:49.945
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:00:50.03 (302ms)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 05:00:50.03
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 05:00:50.03 (0s)
> Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 05:00:50.03
Jan 29 05:00:50.173: INFO: Getting bootstrap-e2e-minion-group-q3jk
Jan 29 05:00:50.173: INFO: Getting bootstrap-e2e-minion-group-fr2s
Jan 29 05:00:50.173: INFO: Getting bootstrap-e2e-minion-group-8xzv
Jan 29 05:00:50.219: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-8xzv condition Ready to be true
Jan 29 05:00:50.219: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-q3jk condition Ready to be true
Jan 29 05:00:50.219: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-fr2s condition Ready to be true
Jan 29 05:00:50.266: INFO: Node bootstrap-e2e-minion-group-8xzv has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67]
Jan 29 05:00:50.266: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67]
Jan 29 05:00:50.266: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5sc67" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:00:50.267: INFO: Node bootstrap-e2e-minion-group-q3jk has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bjzbd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:00:50.267: INFO: Node bootstrap-e2e-minion-group-fr2s has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-4cpk6]
Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-4cpk6]
Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-4cpk6" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xmtst" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:00:50.312: INFO: Pod "metadata-proxy-v0.1-5sc67": Phase="Running", Reason="", readiness=true. Elapsed: 45.962218ms
Jan 29 05:00:50.312: INFO: Pod "metadata-proxy-v0.1-5sc67" satisfied condition "running and ready, or succeeded"
Jan 29 05:00:50.316: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 48.408023ms
Jan 29 05:00:50.316: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 05:00:50.317: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=true. Elapsed: 50.150568ms
Jan 29 05:00:50.317: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6" satisfied condition "running and ready, or succeeded"
Jan 29 05:00:50.317: INFO: Pod "metadata-proxy-v0.1-xmtst": Phase="Running", Reason="", readiness=true. Elapsed: 49.915808ms
Jan 29 05:00:50.317: INFO: Pod "metadata-proxy-v0.1-xmtst" satisfied condition "running and ready, or succeeded"
Jan 29 05:00:50.317: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 50.069103ms
Jan 29 05:00:50.317: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" satisfied condition "running and ready, or succeeded"
Jan 29 05:00:50.317: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-4cpk6]
Jan 29 05:00:50.317: INFO: Getting external IP address for bootstrap-e2e-minion-group-fr2s
Jan 29 05:00:50.317: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-fr2s(104.196.249.18:22)
Jan 29 05:00:50.318: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=true. Elapsed: 50.606987ms
Jan 29 05:00:50.318: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" satisfied condition "running and ready, or succeeded"
Jan 29 05:00:50.318: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=true. Elapsed: 50.742024ms
Jan 29 05:00:50.318: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" satisfied condition "running and ready, or succeeded"
Jan 29 05:00:50.318: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67]
Jan 29 05:00:50.318: INFO: Getting external IP address for bootstrap-e2e-minion-group-8xzv
Jan 29 05:00:50.318: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-8xzv(34.168.157.136:22)
Jan 29 05:00:50.318: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=true. Elapsed: 51.286914ms
Jan 29 05:00:50.318: INFO: Pod "metadata-proxy-v0.1-bjzbd" satisfied condition "running and ready, or succeeded"
Jan 29 05:00:50.318: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:00:50.318: INFO: Getting external IP address for bootstrap-e2e-minion-group-q3jk
Jan 29 05:00:50.318: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-q3jk(34.82.121.186:22)
Jan 29 05:00:50.830: INFO: ssh prow@34.82.121.186:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 05:00:50.830: INFO: ssh prow@34.82.121.186:22: stdout: ""
Jan 29 05:00:50.830: INFO: ssh prow@34.82.121.186:22: stderr: ""
Jan 29 05:00:50.830: INFO: ssh prow@34.82.121.186:22: exit code: 0
Jan 29 05:00:50.830: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-q3jk condition Ready to be false
Jan 29 05:00:50.834: INFO: ssh prow@104.196.249.18:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 05:00:50.834: INFO: ssh prow@104.196.249.18:22: stdout: ""
Jan 29 05:00:50.834: INFO: ssh prow@104.196.249.18:22: stderr: ""
Jan 29 05:00:50.834: INFO: ssh prow@104.196.249.18:22: exit code: 0
Jan 29 05:00:50.834: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-fr2s condition Ready to be false
Jan 29 05:00:50.843: INFO: ssh prow@34.168.157.136:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 &
Jan 29 05:00:50.843: INFO: ssh prow@34.168.157.136:22: stdout: ""
Jan 29 05:00:50.843: INFO: ssh prow@34.168.157.136:22: stderr: ""
Jan 29 05:00:50.843: INFO: ssh prow@34.168.157.136:22: exit code: 0
Jan 29 05:00:50.843: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-8xzv condition Ready to be false
Jan 29 05:00:50.872: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:50.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:50.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:52.916: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:52.920: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:52.929: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false.
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:54.959: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:54.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:54.973: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
[... the same three "Condition Ready ... is true instead of false. Reason: KubeletReady" polls repeat every ~2s for q3jk, fr2s, and 8xzv, 05:00:57.002 through 05:01:37.945 ...]
Jan 29 05:01:39.991: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:39.991: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-q3jk condition Ready to be true
Jan 29 05:01:39.991: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:40.034: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:42.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false.
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:01:42.036: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:01:42.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:01:44.161: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-fr2s condition Ready to be true Jan 29 05:01:44.162: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-8xzv condition Ready to be true Jan 29 05:01:44.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure Jan 29 05:01:44.297: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:01:44.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:01:46.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure Jan 29 05:01:46.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:01:46.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:01:48.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:48.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:48.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:50.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:50.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:50.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:52.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:52.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:52.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:54.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:54.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:54.537: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:56.475: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:56.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:56.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:58.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:58.620: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:58.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:00.564: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:00.663: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:00.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:02.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:02.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:02.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:04.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:04.802: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:04.802: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:06.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:06.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:06.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:08.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:08.893: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:08.893: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:10.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:10.942: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:10.942: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:12.828: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:12.987: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:12.987: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:14.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:15.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:15.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:16.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:17.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:17.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:18.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:19.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:19.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:21.006: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:21.181: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:21.181: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:23.049: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:23.227: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:23.227: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:25.093: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:25.271: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:25.271: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:27.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:27.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:27.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:29.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:29.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:29.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:31.229: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:31.408: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:31.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:33.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:33.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:33.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:35.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:35.508: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:35.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:37.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:37.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:37.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:39.407: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:39.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:39.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:41.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:41.644: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:41.644: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:43.493: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:43.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:43.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:45.554: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:45.737: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:45.737: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:47.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:47.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:47.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:49.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:49.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:49.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:51.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:51.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:51.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:53.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:53.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:55.771: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:55.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:55.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:57.815: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:58.012: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:58.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:59.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:03:00.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:00.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:01.902: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:03:02.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:02.104: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:03.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:03:04.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:04.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:06.112: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:03:06.112: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bjzbd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:03:06.112: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:03:06.165: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. Elapsed: 52.153652ms
Jan 29 05:03:06.165: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }]
Jan 29 05:03:06.165: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 52.345705ms
Jan 29 05:03:06.165: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-4cpk6]
Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-4cpk6" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xmtst" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67]
Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5sc67" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:03:06.254: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=true. Elapsed: 55.538505ms
Jan 29 05:03:06.254: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6" satisfied condition "running and ready, or succeeded"
Jan 29 05:03:06.257: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=true. Elapsed: 58.375731ms
Jan 29 05:03:06.257: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" satisfied condition "running and ready, or succeeded"
Jan 29 05:03:06.257: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.54741ms
Jan 29 05:03:06.257: INFO: Pod "metadata-proxy-v0.1-5sc67": Phase="Running", Reason="", readiness=true. Elapsed: 58.589562ms
Jan 29 05:03:06.257: INFO: Pod "metadata-proxy-v0.1-5sc67" satisfied condition "running and ready, or succeeded"
Jan 29 05:03:06.257: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:46 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:46 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:03:06.257: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67]
Jan 29 05:03:06.257: INFO: Reboot successful on node bootstrap-e2e-minion-group-8xzv
Jan 29 05:03:06.257: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 58.615823ms
Jan 29 05:03:06.257: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" satisfied condition "running and ready, or succeeded"
Jan 29 05:03:06.257: INFO: Pod "metadata-proxy-v0.1-xmtst": Phase="Running", Reason="", readiness=true. Elapsed: 58.818017ms
Jan 29 05:03:06.257: INFO: Pod "metadata-proxy-v0.1-xmtst" satisfied condition "running and ready, or succeeded"
Jan 29 05:03:08.211: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. Elapsed: 2.098008858s
Jan 29 05:03:08.211: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }]
Jan 29 05:03:08.211: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 2.098256835s
Jan 29 05:03:08.211: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:03:08.300: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 2.101712893s
Jan 29 05:03:08.300: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 05:03:08.300: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-4cpk6]
Jan 29 05:03:08.300: INFO: Reboot successful on node bootstrap-e2e-minion-group-fr2s
Jan 29 05:03:10.209: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. Elapsed: 4.09674797s
Jan 29 05:03:10.209: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 4.09686069s
Jan 29 05:03:10.209: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }]
Jan 29 05:03:10.209: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:03:12.210: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. Elapsed: 6.097519266s
Jan 29 05:03:12.210: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }]
Jan 29 05:03:12.210: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 6.097809604s
Jan 29 05:03:12.210: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:03:14.211: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 8.098080168s
Jan 29 05:03:14.211: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. Elapsed: 8.098008785s
Jan 29 05:03:14.211: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:03:14.211: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }]
Jan 29 05:03:16.210: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 10.097702076s
Jan 29 05:03:16.210: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. Elapsed: 10.097596252s
Jan 29 05:03:16.210: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }]
Jan 29 05:03:16.210: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:03:18.209: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. Elapsed: 12.096206255s
Jan 29 05:03:18.209: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }]
Jan 29 05:03:18.209: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 12.096367258s
Jan 29 05:03:18.209: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:03:20.206: INFO: Encountered non-retryable error while getting pod kube-system/metadata-proxy-v0.1-bjzbd: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-bjzbd": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:20.206: INFO: Pod metadata-proxy-v0.1-bjzbd failed to be running and ready, or succeeded.
Jan 29 05:03:20.206: INFO: Encountered non-retryable error while getting pod kube-system/kube-proxy-bootstrap-e2e-minion-group-q3jk: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-q3jk": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:20.206: INFO: Pod kube-proxy-bootstrap-e2e-minion-group-q3jk failed to be running and ready, or succeeded.
Jan 29 05:03:20.206: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: false.
Pods: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd] Jan 29 05:03:20.206: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-q3jk: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:56:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 05:01:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 05:00:46 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:56:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP:10.138.0.5 PodIPs:[{IP:10.138.0.5}] StartTime:2023-01-29 04:56:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 05:00:45 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 04:57:34 +0000 UTC,FinishedAt:2023-01-29 05:00:18 +0000 UTC,ContainerID:containerd://008f1d8d62f2eb36fa821751d247247dc74c2d220e6c0eeb678aabade7aeffe7,}} Ready:true RestartCount:3 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2 ImageID:sha256:ef97fd17575d534d8bc2960bbf1e744379f3ac6e86b9b97974e086f1516b75e5 ContainerID:containerd://7a87becb1a04aebf90cc9b182c249c763496c5468b6c9073a9c19bbe2aa19d53 Started:0xc00511360f}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 29 05:03:20.246: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-q3jk/kube-proxy, err: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-q3jk/log?container=kube-proxy&previous=false": 
dial tcp 34.145.111.53:443: connect: connection refused: Jan 29 05:03:20.246: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-q3jk/kube-proxy, err: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-q3jk/log?container=kube-proxy&previous=false": dial tcp 34.145.111.53:443: connect: connection refused: Jan 29 05:03:20.246: INFO: Status for not ready pod kube-system/metadata-proxy-v0.1-bjzbd: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:56:09 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 05:01:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:56:12 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:56:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP:10.138.0.5 PodIPs:[{IP:10.138.0.5}] StartTime:2023-01-29 04:56:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:metadata-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 04:56:10 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:registry.k8s.io/metadata-proxy:v0.1.12 ImageID:registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a ContainerID:containerd://fb8ed2fa2de1544532f7afd3a47b7074ccacdcbd143ee811ac992897826d03ed Started:0xc00511333a} {Name:prometheus-to-sd-exporter State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 04:56:12 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 
Image:gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1 ImageID:gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 ContainerID:containerd://cd2bdf5eb970e6bd6ca24207f63424c55c0ecebfdfdde3afe92c9908d4a64de9 Started:0xc00511333b}] QOSClass:Guaranteed EphemeralContainerStatuses:[]} Jan 29 05:03:20.286: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-bjzbd/metadata-proxy, err: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-bjzbd/log?container=metadata-proxy&previous=false": dial tcp 34.145.111.53:443: connect: connection refused: Jan 29 05:03:20.325: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-bjzbd/prometheus-to-sd-exporter, err: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-bjzbd/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 34.145.111.53:443: connect: connection refused: Jan 29 05:03:20.325: INFO: Node bootstrap-e2e-minion-group-q3jk failed reboot test. 
Jan 29 05:03:20.325: INFO: Executing termination hook on nodes Jan 29 05:03:20.325: INFO: Getting external IP address for bootstrap-e2e-minion-group-8xzv Jan 29 05:03:20.325: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-8xzv(34.168.157.136:22) Jan 29 05:03:20.856: INFO: ssh prow@34.168.157.136:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 29 05:03:20.856: INFO: ssh prow@34.168.157.136:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 05:01:00 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 05:03:20.856: INFO: ssh prow@34.168.157.136:22: stderr: "" Jan 29 05:03:20.856: INFO: ssh prow@34.168.157.136:22: exit code: 0 Jan 29 05:03:20.856: INFO: Getting external IP address for bootstrap-e2e-minion-group-fr2s Jan 29 05:03:20.856: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-fr2s(104.196.249.18:22) Jan 29 05:03:21.380: INFO: ssh prow@104.196.249.18:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 29 05:03:21.380: INFO: ssh prow@104.196.249.18:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 05:01:00 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 05:03:21.380: INFO: ssh prow@104.196.249.18:22: stderr: "" Jan 29 05:03:21.380: INFO: ssh prow@104.196.249.18:22: exit code: 0 Jan 29 05:03:21.380: INFO: Getting external IP address for bootstrap-e2e-minion-group-q3jk Jan 29 05:03:21.380: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-q3jk(34.82.121.186:22) Jan 
29 05:03:21.900: INFO: ssh prow@34.82.121.186:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log Jan 29 05:03:21.900: INFO: ssh prow@34.82.121.186:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 05:01:00 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n" Jan 29 05:03:21.900: INFO: ssh prow@34.82.121.186:22: stderr: "" Jan 29 05:03:21.900: INFO: ssh prow@34.82.121.186:22: exit code: 0 [FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 05:03:21.901 < Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 05:03:21.901 (2m31.871s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:03:21.901 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 05:03:21.901 Jan 29 05:03:21.941: INFO: Unexpected error: <*url.Error | 0xc003640210>: { Op: "Get", URL: "https://34.145.111.53/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc002996050>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002ff0db0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 145, 111, 53], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00126a080>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://34.145.111.53/api/v1/namespaces/kube-system/events": dial tcp 34.145.111.53:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/29/23 05:03:21.941 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:03:21.941 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:03:21.941 Jan 29 05:03:21.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:03:21.981 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 05:03:21.981 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 05:03:21.981 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:03:21.981 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:03:21.981 STEP: Collecting events from namespace "reboot-4689". 
- test/e2e/framework/debug/dump.go:42 @ 01/29/23 05:03:21.981 Jan 29 05:03:22.021: INFO: Unexpected error: failed to list events in namespace "reboot-4689": <*url.Error | 0xc002ff0de0>: { Op: "Get", URL: "https://34.145.111.53/api/v1/namespaces/reboot-4689/events", Err: <*net.OpError | 0xc00352d950>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003232840>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 145, 111, 53], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0011c38a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:03:22.021 (40ms) [FAILED] failed to list events in namespace "reboot-4689": Get "https://34.145.111.53/api/v1/namespaces/reboot-4689/events": dial tcp 34.145.111.53:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/29/23 05:03:22.021 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:03:22.021 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:03:22.021 STEP: Destroying namespace "reboot-4689" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 05:03:22.021 [FAILED] Couldn't delete ns: "reboot-4689": Delete "https://34.145.111.53/api/v1/namespaces/reboot-4689": dial tcp 34.145.111.53:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.145.111.53/api/v1/namespaces/reboot-4689", Err:(*net.OpError)(0xc0029966e0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/29/23 05:03:22.062 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:03:22.062 (41ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:03:22.062 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:03:22.062 (0s)
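For readability, the `\n\t\t`-escaped command the test pushes over SSH to each node (and whose `set -x` trace is echoed back in the termination-hook output above) unescapes to the script below. The test launches it with `nohup sh -c '…' >/tmp/drop-inbound.log 2>&1 &`; this sketch only writes it to a file and syntax-checks it, because actually running it drops all inbound packets except loopback for 120 seconds and will lock you out of any remote machine.

```shell
# Reconstructed verbatim from the escaped SSH command strings in the log.
# Syntax-checked only -- do NOT execute on a machine you reach over SSH.
cat > /tmp/drop-inbound.sh <<'EOF'
set -x
sleep 10
# Keep loopback traffic working before dropping everything else inbound.
while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done
while true; do sudo iptables -I INPUT 2 -j DROP && break; done
date
sleep 120
# Remove the two rules again once the blackout window is over.
while true; do sudo iptables -D INPUT -j DROP && break; done
while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done
EOF
sh -n /tmp/drop-inbound.sh && echo "syntax OK"
```

The stdout captured from all three nodes at 05:03:2x shows this script ran to completion on each of them, so the iptables rules were removed as intended; the failure is that node `q3jk`'s pods never returned to Ready within the 5m window afterwards.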
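The "running and ready, or succeeded" condition that the log evaluates against each pod can be summarized as: pass if the pod's phase is Succeeded, or if its phase is Running and its Ready condition is True. The sketch below is a simplified illustration, not the e2e framework's actual helper; note that in the failure above the pod stays phase Running with ContainersReady True, but Ready flipped to False at 05:01:38, so the check fails until the timeout.

```shell
# Simplified sketch (assumed, not the framework's real helper) of the
# "running and ready, or succeeded" pod check seen throughout the log.
running_and_ready_or_succeeded() {
  phase=$1 ready=$2   # ready: the Ready condition's status, "True"/"False"
  [ "$phase" = Succeeded ] && return 0
  [ "$phase" = Running ] && [ "$ready" = True ]
}

# The failing pod above: phase Running, but Ready is False.
if running_and_ready_or_succeeded Running False; then echo pass; else echo fail; fi
```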
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sdropping\sall\sinbound\spackets\sfor\sa\swhile\sand\sensure\sthey\sfunction\safterwards$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 05:03:21.901 There were additional failures detected after the initial failure. These are visible in the timeline. (from junit_01.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:00:49.727 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:00:49.727 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:00:49.727 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 05:00:49.728 Jan 29 05:00:49.728: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 05:00:49.729 STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 05:00:49.864 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 05:00:49.945 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:00:50.03 (302ms) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 05:00:50.03 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 05:00:50.03 (0s) > Enter [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 05:00:50.03 Jan 29 05:00:50.173: INFO: Getting bootstrap-e2e-minion-group-q3jk Jan 29 05:00:50.173: INFO: Getting bootstrap-e2e-minion-group-fr2s Jan 29 05:00:50.173: INFO: Getting bootstrap-e2e-minion-group-8xzv Jan 29 05:00:50.219: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-8xzv condition Ready to be true Jan 29 05:00:50.219: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-q3jk 
condition Ready to be true Jan 29 05:00:50.219: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-fr2s condition Ready to be true Jan 29 05:00:50.266: INFO: Node bootstrap-e2e-minion-group-8xzv has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67] Jan 29 05:00:50.266: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67] Jan 29 05:00:50.266: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5sc67" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:00:50.267: INFO: Node bootstrap-e2e-minion-group-q3jk has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd] Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd] Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bjzbd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:00:50.267: INFO: Node bootstrap-e2e-minion-group-fr2s has 4 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-4cpk6] Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-4cpk6] Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-4cpk6" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod 
"kube-proxy-bootstrap-e2e-minion-group-q3jk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xmtst" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:00:50.267: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:00:50.312: INFO: Pod "metadata-proxy-v0.1-5sc67": Phase="Running", Reason="", readiness=true. Elapsed: 45.962218ms Jan 29 05:00:50.312: INFO: Pod "metadata-proxy-v0.1-5sc67" satisfied condition "running and ready, or succeeded" Jan 29 05:00:50.316: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 48.408023ms Jan 29 05:00:50.316: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 05:00:50.317: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=true. Elapsed: 50.150568ms Jan 29 05:00:50.317: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6" satisfied condition "running and ready, or succeeded" Jan 29 05:00:50.317: INFO: Pod "metadata-proxy-v0.1-xmtst": Phase="Running", Reason="", readiness=true. Elapsed: 49.915808ms Jan 29 05:00:50.317: INFO: Pod "metadata-proxy-v0.1-xmtst" satisfied condition "running and ready, or succeeded" Jan 29 05:00:50.317: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 50.069103ms Jan 29 05:00:50.317: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" satisfied condition "running and ready, or succeeded" Jan 29 05:00:50.317: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-4cpk6] Jan 29 05:00:50.317: INFO: Getting external IP address for bootstrap-e2e-minion-group-fr2s Jan 29 05:00:50.317: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-fr2s(104.196.249.18:22) Jan 29 05:00:50.318: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=true. Elapsed: 50.606987ms Jan 29 05:00:50.318: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" satisfied condition "running and ready, or succeeded" Jan 29 05:00:50.318: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=true. Elapsed: 50.742024ms Jan 29 05:00:50.318: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" satisfied condition "running and ready, or succeeded" Jan 29 05:00:50.318: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67] Jan 29 05:00:50.318: INFO: Getting external IP address for bootstrap-e2e-minion-group-8xzv Jan 29 05:00:50.318: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-8xzv(34.168.157.136:22) Jan 29 05:00:50.318: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=true. Elapsed: 51.286914ms Jan 29 05:00:50.318: INFO: Pod "metadata-proxy-v0.1-bjzbd" satisfied condition "running and ready, or succeeded" Jan 29 05:00:50.318: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd] Jan 29 05:00:50.318: INFO: Getting external IP address for bootstrap-e2e-minion-group-q3jk Jan 29 05:00:50.318: INFO: SSH "\n\t\tnohup sh -c '\n\t\t\tset -x\n\t\t\tsleep 10\n\t\t\twhile true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done\n\t\t\twhile true; do sudo iptables -I INPUT 2 -j DROP && break; done\n\t\t\tdate\n\t\t\tsleep 120\n\t\t\twhile true; do sudo iptables -D INPUT -j DROP && break; done\n\t\t\twhile true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done\n\t\t' >/tmp/drop-inbound.log 2>&1 &\n\t\t" on bootstrap-e2e-minion-group-q3jk(34.82.121.186:22) Jan 29 05:00:50.830: INFO: ssh prow@34.82.121.186:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 29 05:00:50.830: INFO: ssh prow@34.82.121.186:22: stdout: "" Jan 29 05:00:50.830: INFO: ssh prow@34.82.121.186:22: stderr: "" Jan 29 05:00:50.830: INFO: ssh prow@34.82.121.186:22: exit code: 0 Jan 29 05:00:50.830: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-q3jk condition Ready to be false Jan 29 05:00:50.834: INFO: ssh prow@104.196.249.18:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 29 05:00:50.834: INFO: ssh prow@104.196.249.18:22: stdout: "" Jan 29 05:00:50.834: INFO: ssh prow@104.196.249.18:22: stderr: "" Jan 29 05:00:50.834: INFO: ssh 
prow@104.196.249.18:22: exit code: 0 Jan 29 05:00:50.834: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-fr2s condition Ready to be false Jan 29 05:00:50.843: INFO: ssh prow@34.168.157.136:22: command: nohup sh -c ' set -x sleep 10 while true; do sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT && break; done while true; do sudo iptables -I INPUT 2 -j DROP && break; done date sleep 120 while true; do sudo iptables -D INPUT -j DROP && break; done while true; do sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT && break; done ' >/tmp/drop-inbound.log 2>&1 & Jan 29 05:00:50.843: INFO: ssh prow@34.168.157.136:22: stdout: "" Jan 29 05:00:50.843: INFO: ssh prow@34.168.157.136:22: stderr: "" Jan 29 05:00:50.843: INFO: ssh prow@34.168.157.136:22: exit code: 0 Jan 29 05:00:50.843: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-8xzv condition Ready to be false Jan 29 05:00:50.872: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:00:50.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:00:50.886: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:00:52.916: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:00:52.920: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:00:52.929: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:54.959: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:54.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:54.973: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:57.002: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:57.006: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:57.019: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:59.052: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:59.052: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:00:59.064: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:01.098: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:01.098: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:01.108: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:03.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:03.146: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:03.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:05.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:05.192: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:05.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:07.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:07.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:07.240: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:09.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:09.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:09.284: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:11.331: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:11.331: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:11.331: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:13.376: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:13.377: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:13.377: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:15.421: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:15.421: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:15.422: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:17.473: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:17.473: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:17.474: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:19.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:19.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:19.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:21.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:21.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:21.567: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:23.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:23.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:23.616: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:25.663: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:25.663: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:25.663: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:27.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:27.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:27.711: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:29.758: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:29.758: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:29.758: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:31.804: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:31.804: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:31.805: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:33.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:33.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:33.850: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:35.897: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:35.897: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:35.897: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:37.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:37.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:37.945: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:39.991: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:39.991: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-q3jk condition Ready to be true
Jan 29 05:01:39.991: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:40.034: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:42.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:42.036: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:01:42.078: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:44.161: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-fr2s condition Ready to be true
Jan 29 05:01:44.162: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-8xzv condition Ready to be true
Jan 29 05:01:44.211: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:44.297: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:44.299: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:46.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:46.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:46.344: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:48.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:48.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:48.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:50.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:50.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:50.434: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:52.390: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:52.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:52.480: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:54.433: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:54.533: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:54.537: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:56.475: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:56.577: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:56.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:58.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:01:58.620: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:01:58.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:00.564: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:00.663: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:00.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:02.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:02.707: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:02.709: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:04.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:04.802: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:04.802: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:06.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:06.848: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:06.849: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:08.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:08.893: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:08.893: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:10.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:10.942: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:10.942: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:12.828: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:12.987: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:12.987: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:14.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:15.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:15.035: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:16.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:17.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:17.080: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:18.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:19.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:19.129: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:21.006: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:21.181: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:21.181: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:23.049: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:23.227: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:23.227: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:25.093: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:25.271: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:25.271: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:27.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:27.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:27.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:29.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:29.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:29.362: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:31.229: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:31.408: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:31.410: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:33.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:33.453: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:33.456: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:35.318: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:35.508: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:35.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:37.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:37.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:37.555: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:39.407: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:39.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:39.600: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:41.450: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:41.644: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:41.644: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:43.493: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:43.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:43.691: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:45.554: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:45.737: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:45.737: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:47.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:47.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:47.786: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:49.641: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:49.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:49.831: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:51.684: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:51.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:51.877: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:53.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:53.921: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:55.771: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:55.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:55.966: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:57.815: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:02:58.012: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:58.013: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:02:59.858: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:01:38 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:03:00.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:00.058: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:01.902: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:03:02.103: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:02.104: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:03.948: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 05:01:44 +0000 UTC}]. Failure
Jan 29 05:03:04.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:04.150: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:03:06.112: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:03:06.112: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bjzbd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:03:06.112: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:03:06.165: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false.
Elapsed: 52.153652ms Jan 29 05:03:06.165: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }] Jan 29 05:03:06.165: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 52.345705ms Jan 29 05:03:06.165: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-4cpk6] Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-4cpk6" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xmtst" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67] Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod 
"metadata-proxy-v0.1-5sc67" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:03:06.198: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:03:06.254: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=true. Elapsed: 55.538505ms Jan 29 05:03:06.254: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6" satisfied condition "running and ready, or succeeded" Jan 29 05:03:06.257: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=true. Elapsed: 58.375731ms Jan 29 05:03:06.257: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" satisfied condition "running and ready, or succeeded" Jan 29 05:03:06.257: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 58.54741ms Jan 29 05:03:06.257: INFO: Pod "metadata-proxy-v0.1-5sc67": Phase="Running", Reason="", readiness=true. 
Elapsed: 58.589562ms Jan 29 05:03:06.257: INFO: Pod "metadata-proxy-v0.1-5sc67" satisfied condition "running and ready, or succeeded" Jan 29 05:03:06.257: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:46 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:46 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:03:06.257: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67] Jan 29 05:03:06.257: INFO: Reboot successful on node bootstrap-e2e-minion-group-8xzv Jan 29 05:03:06.257: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 58.615823ms Jan 29 05:03:06.257: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" satisfied condition "running and ready, or succeeded" Jan 29 05:03:06.257: INFO: Pod "metadata-proxy-v0.1-xmtst": Phase="Running", Reason="", readiness=true. Elapsed: 58.818017ms Jan 29 05:03:06.257: INFO: Pod "metadata-proxy-v0.1-xmtst" satisfied condition "running and ready, or succeeded" Jan 29 05:03:08.211: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. 
Elapsed: 2.098008858s Jan 29 05:03:08.211: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }] Jan 29 05:03:08.211: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 2.098256835s Jan 29 05:03:08.211: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:03:08.300: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 2.101712893s Jan 29 05:03:08.300: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 05:03:08.300: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0 kube-dns-autoscaler-5f6455f985-4cpk6] Jan 29 05:03:08.300: INFO: Reboot successful on node bootstrap-e2e-minion-group-fr2s Jan 29 05:03:10.209: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.09674797s Jan 29 05:03:10.209: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 4.09686069s Jan 29 05:03:10.209: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }] Jan 29 05:03:10.209: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:03:12.210: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.097519266s Jan 29 05:03:12.210: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }] Jan 29 05:03:12.210: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 6.097809604s Jan 29 05:03:12.210: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:03:14.211: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 8.098080168s Jan 29 05:03:14.211: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.098008785s Jan 29 05:03:14.211: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:03:14.211: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }] Jan 29 05:03:16.210: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 10.097702076s Jan 29 05:03:16.210: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.097596252s Jan 29 05:03:16.210: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }] Jan 29 05:03:16.210: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:03:18.209: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. Elapsed: 12.096206255s Jan 29 05:03:18.209: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }] Jan 29 05:03:18.209: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.096367258s Jan 29 05:03:18.209: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:01:38 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:03:20.206: INFO: Encountered non-retryable error while getting pod kube-system/metadata-proxy-v0.1-bjzbd: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-bjzbd": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:20.206: INFO: Pod metadata-proxy-v0.1-bjzbd failed to be running and ready, or succeeded. Jan 29 05:03:20.206: INFO: Encountered non-retryable error while getting pod kube-system/kube-proxy-bootstrap-e2e-minion-group-q3jk: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-q3jk": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:20.206: INFO: Pod kube-proxy-bootstrap-e2e-minion-group-q3jk failed to be running and ready, or succeeded. Jan 29 05:03:20.206: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: false. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd] Jan 29 05:03:20.206: INFO: Status for not ready pod kube-system/kube-proxy-bootstrap-e2e-minion-group-q3jk: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:56:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 05:01:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 05:00:46 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:56:08 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP:10.138.0.5 PodIPs:[{IP:10.138.0.5}] StartTime:2023-01-29 04:56:08 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:kube-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 05:00:45 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2023-01-29 04:57:34 +0000 UTC,FinishedAt:2023-01-29 05:00:18 +0000 UTC,ContainerID:containerd://008f1d8d62f2eb36fa821751d247247dc74c2d220e6c0eeb678aabade7aeffe7,}} Ready:true RestartCount:3 Image:registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2 ImageID:sha256:ef97fd17575d534d8bc2960bbf1e744379f3ac6e86b9b97974e086f1516b75e5 ContainerID:containerd://7a87becb1a04aebf90cc9b182c249c763496c5468b6c9073a9c19bbe2aa19d53 Started:0xc00511360f}] QOSClass:Burstable EphemeralContainerStatuses:[]} Jan 29 05:03:20.246: INFO: Retrieving log for container kube-system/kube-proxy-bootstrap-e2e-minion-group-q3jk/kube-proxy, err: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-q3jk/log?container=kube-proxy&previous=false": 
dial tcp 34.145.111.53:443: connect: connection refused: Jan 29 05:03:20.246: INFO: Retrieving log for the last terminated container kube-system/kube-proxy-bootstrap-e2e-minion-group-q3jk/kube-proxy, err: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/kube-proxy-bootstrap-e2e-minion-group-q3jk/log?container=kube-proxy&previous=false": dial tcp 34.145.111.53:443: connect: connection refused: Jan 29 05:03:20.246: INFO: Status for not ready pod kube-system/metadata-proxy-v0.1-bjzbd: {Phase:Running Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:56:09 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 05:01:38 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:56:12 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-29 04:56:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.138.0.5 PodIP:10.138.0.5 PodIPs:[{IP:10.138.0.5}] StartTime:2023-01-29 04:56:09 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:metadata-proxy State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 04:56:10 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 Image:registry.k8s.io/metadata-proxy:v0.1.12 ImageID:registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a ContainerID:containerd://fb8ed2fa2de1544532f7afd3a47b7074ccacdcbd143ee811ac992897826d03ed Started:0xc00511333a} {Name:prometheus-to-sd-exporter State:{Waiting:nil Running:&ContainerStateRunning{StartedAt:2023-01-29 04:56:12 +0000 UTC,} Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:true RestartCount:0 
Image:gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1 ImageID:gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 ContainerID:containerd://cd2bdf5eb970e6bd6ca24207f63424c55c0ecebfdfdde3afe92c9908d4a64de9 Started:0xc00511333b}] QOSClass:Guaranteed EphemeralContainerStatuses:[]} Jan 29 05:03:20.286: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-bjzbd/metadata-proxy, err: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-bjzbd/log?container=metadata-proxy&previous=false": dial tcp 34.145.111.53:443: connect: connection refused: Jan 29 05:03:20.325: INFO: Retrieving log for container kube-system/metadata-proxy-v0.1-bjzbd/prometheus-to-sd-exporter, err: Get "https://34.145.111.53/api/v1/namespaces/kube-system/pods/metadata-proxy-v0.1-bjzbd/log?container=prometheus-to-sd-exporter&previous=false": dial tcp 34.145.111.53:443: connect: connection refused: Jan 29 05:03:20.325: INFO: Node bootstrap-e2e-minion-group-q3jk failed reboot test. 
Jan 29 05:03:20.325: INFO: Executing termination hook on nodes
Jan 29 05:03:20.325: INFO: Getting external IP address for bootstrap-e2e-minion-group-8xzv
Jan 29 05:03:20.325: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-8xzv(34.168.157.136:22)
Jan 29 05:03:20.856: INFO: ssh prow@34.168.157.136:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 05:03:20.856: INFO: ssh prow@34.168.157.136:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 05:01:00 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 05:03:20.856: INFO: ssh prow@34.168.157.136:22: stderr: ""
Jan 29 05:03:20.856: INFO: ssh prow@34.168.157.136:22: exit code: 0
Jan 29 05:03:20.856: INFO: Getting external IP address for bootstrap-e2e-minion-group-fr2s
Jan 29 05:03:20.856: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-fr2s(104.196.249.18:22)
Jan 29 05:03:21.380: INFO: ssh prow@104.196.249.18:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 05:03:21.380: INFO: ssh prow@104.196.249.18:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 05:01:00 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 05:03:21.380: INFO: ssh prow@104.196.249.18:22: stderr: ""
Jan 29 05:03:21.380: INFO: ssh prow@104.196.249.18:22: exit code: 0
Jan 29 05:03:21.380: INFO: Getting external IP address for bootstrap-e2e-minion-group-q3jk
Jan 29 05:03:21.380: INFO: SSH "cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log" on bootstrap-e2e-minion-group-q3jk(34.82.121.186:22)
Jan 29 05:03:21.900: INFO: ssh prow@34.82.121.186:22: command: cat /tmp/drop-inbound.log && rm /tmp/drop-inbound.log
Jan 29 05:03:21.900: INFO: ssh prow@34.82.121.186:22: stdout: "+ sleep 10\n+ true\n+ sudo iptables -I INPUT 1 -s 127.0.0.1 -j ACCEPT\n+ break\n+ true\n+ sudo iptables -I INPUT 2 -j DROP\n+ break\n+ date\nSun Jan 29 05:01:00 UTC 2023\n+ sleep 120\n+ true\n+ sudo iptables -D INPUT -j DROP\n+ break\n+ true\n+ sudo iptables -D INPUT -s 127.0.0.1 -j ACCEPT\n+ break\n"
Jan 29 05:03:21.900: INFO: ssh prow@34.82.121.186:22: stderr: ""
Jan 29 05:03:21.900: INFO: ssh prow@34.82.121.186:22: exit code: 0
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 05:03:21.901
< Exit [It] each node by dropping all inbound packets for a while and ensure they function afterwards - test/e2e/cloud/gcp/reboot.go:136 @ 01/29/23 05:03:21.901 (2m31.871s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:03:21.901
STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 05:03:21.901 Jan 29 05:03:21.941: INFO: Unexpected error: <*url.Error | 0xc003640210>: { Op: "Get", URL: "https://34.145.111.53/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc002996050>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc002ff0db0>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 145, 111, 53], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00126a080>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://34.145.111.53/api/v1/namespaces/kube-system/events": dial tcp 34.145.111.53:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/29/23 05:03:21.941 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:03:21.941 (40ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:03:21.941 Jan 29 05:03:21.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:03:21.981 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 05:03:21.981 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 05:03:21.981 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:03:21.981 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:03:21.981 STEP: Collecting events from namespace "reboot-4689". 
- test/e2e/framework/debug/dump.go:42 @ 01/29/23 05:03:21.981 Jan 29 05:03:22.021: INFO: Unexpected error: failed to list events in namespace "reboot-4689": <*url.Error | 0xc002ff0de0>: { Op: "Get", URL: "https://34.145.111.53/api/v1/namespaces/reboot-4689/events", Err: <*net.OpError | 0xc00352d950>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc003232840>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 145, 111, 53], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0011c38a0>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:03:22.021 (40ms) [FAILED] failed to list events in namespace "reboot-4689": Get "https://34.145.111.53/api/v1/namespaces/reboot-4689/events": dial tcp 34.145.111.53:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/29/23 05:03:22.021 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:03:22.021 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:03:22.021 STEP: Destroying namespace "reboot-4689" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 05:03:22.021 [FAILED] Couldn't delete ns: "reboot-4689": Delete "https://34.145.111.53/api/v1/namespaces/reboot-4689": dial tcp 34.145.111.53:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.145.111.53/api/v1/namespaces/reboot-4689", Err:(*net.OpError)(0xc0029966e0)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/29/23 05:03:22.062 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:03:22.062 (41ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:03:22.062 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:03:22.062 (0s)
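For readability, the "drop all inbound packets" hook that the termination-hook stdout traces (the `+ sleep 10`, `+ sudo iptables …` lines above) can be sketched as follows. This is a reconstruction inferred from the `set -x` trace, not the actual e2e source: the trace's `+ true` / `+ break` retry loops are simplified away, and the dry-run parameter is added here so the sketch can be inspected without root on a real node.

```shell
#!/usr/bin/env bash
# Sketch of the drop-inbound hook, reconstructed from the set -x trace above.
# Arguments (all illustrative additions, not in the original script):
#   $1  iptables command, e.g. "sudo iptables" on a node, "echo iptables" for a dry run
#   $2  seconds to keep inbound traffic dropped (the trace shows 120)
#   $3  initial delay so the SSH session can detach (the trace shows 10)
drop_inbound() {
  local ipt="${1:-sudo iptables}"
  local hold="${2:-120}"
  sleep "${3:-10}"                          # let the invoking SSH session detach
  $ipt -I INPUT 1 -s 127.0.0.1 -j ACCEPT    # rule 1: keep loopback traffic alive
  $ipt -I INPUT 2 -j DROP                   # rule 2: drop all other inbound packets
  date                                      # timestamp when the blackout began
  sleep "$hold"                             # hold the drop window
  $ipt -D INPUT -j DROP                     # restore inbound traffic
  $ipt -D INPUT -s 127.0.0.1 -j ACCEPT      # remove the loopback exception too
}
```

During the roughly two-minute drop window the kubelet cannot reach the apiserver, which matches the `NodeStatusUnknown, message: Kubelet stopped posting node status` entries and the `node.kubernetes.io/unreachable` taints seen while the nodes were blacked out.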
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 05:09:30.998 (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:03:22.132
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:03:22.132 (0s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:03:22.132
STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 05:03:22.132
Jan 29 05:03:22.132: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 05:03:22.135
Jan 29 05:03:22.174: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:24.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:26.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:28.215: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:30.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:32.214: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:34.214: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:36.214: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:38.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:40.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:42.215: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:44.215: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:46.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:48.215: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
Jan 29 05:03:50.214: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused
STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 05:05:30.929
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 05:05:31.021
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:05:31.103 (2m8.972s)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 05:05:31.103
< Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 05:05:31.103 (0s)
> Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 05:05:31.103
Jan 29 05:05:31.287: INFO: Getting bootstrap-e2e-minion-group-fr2s
Jan 29 05:05:31.287: INFO: Getting bootstrap-e2e-minion-group-q3jk
Jan 29 05:05:31.287: INFO: Getting bootstrap-e2e-minion-group-8xzv
Jan 29 05:05:31.342: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-q3jk condition Ready to be true
Jan 29 05:05:31.342: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-fr2s condition Ready to be true
Jan 29 05:05:31.370: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-8xzv condition Ready to be true
Jan 29 05:05:31.387: INFO: Node bootstrap-e2e-minion-group-q3jk has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:05:31.387: INFO: Node bootstrap-e2e-minion-group-fr2s has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0]
Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0]
Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bjzbd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-4cpk6" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xmtst" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:05:31.413: INFO: Node bootstrap-e2e-minion-group-8xzv has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67]
Jan 29 05:05:31.413: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67]
Jan 29 05:05:31.413: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5sc67" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:05:31.414: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:05:31.436: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=true. Elapsed: 49.215324ms
Jan 29 05:05:31.436: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6" satisfied condition "running and ready, or succeeded"
Jan 29 05:05:31.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 52.427942ms
Jan 29 05:05:31.439: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.469635ms
Jan 29 05:05:31.439: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=true. Elapsed: 52.511623ms
Jan 29 05:05:31.439: INFO: Pod "metadata-proxy-v0.1-bjzbd" satisfied condition "running and ready, or succeeded"
Jan 29 05:05:31.439: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:05:31.439: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:31.440: INFO: Pod "metadata-proxy-v0.1-xmtst": Phase="Running", Reason="", readiness=true. Elapsed: 52.442265ms
Jan 29 05:05:31.440: INFO: Pod "metadata-proxy-v0.1-xmtst" satisfied condition "running and ready, or succeeded"
Jan 29 05:05:31.440: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=true. Elapsed: 52.610726ms
Jan 29 05:05:31.440: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" satisfied condition "running and ready, or succeeded"
Jan 29 05:05:31.440: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:05:31.440: INFO: Getting external IP address for bootstrap-e2e-minion-group-q3jk
Jan 29 05:05:31.440: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-q3jk(34.82.121.186:22)
Jan 29 05:05:31.459: INFO: Pod "metadata-proxy-v0.1-5sc67": Phase="Running", Reason="", readiness=true. Elapsed: 45.409107ms
Jan 29 05:05:31.459: INFO: Pod "metadata-proxy-v0.1-5sc67" satisfied condition "running and ready, or succeeded"
Jan 29 05:05:31.459: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=false. Elapsed: 45.279875ms
Jan 29 05:05:31.459: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:05:31.954: INFO: ssh prow@34.82.121.186:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 29 05:05:31.954: INFO: ssh prow@34.82.121.186:22: stdout: ""
Jan 29 05:05:31.954: INFO: ssh prow@34.82.121.186:22: stderr: ""
Jan 29 05:05:31.954: INFO: ssh prow@34.82.121.186:22: exit code: 0
Jan 29 05:05:31.954: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-q3jk condition Ready to be false
Jan 29 05:05:31.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:33.486: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 2.098955357s
Jan 29 05:05:33.486: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.098998826s
Jan 29 05:05:33.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:05:33.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:33.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=false. Elapsed: 2.088978611s
Jan 29 05:05:33.503: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:05:34.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:35.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.096037466s
Jan 29 05:05:35.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:35.485: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 4.097588s
Jan 29 05:05:35.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:05:35.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=false. Elapsed: 4.089376832s
Jan 29 05:05:35.503: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:05:36.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:37.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 6.095191048s
Jan 29 05:05:37.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:37.483: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 6.096392189s
Jan 29 05:05:37.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:05:37.502: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=false. Elapsed: 6.088444811s
Jan 29 05:05:37.502: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:05:38.135: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:39.485: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.098348224s
Jan 29 05:05:39.485: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 8.098334844s
Jan 29 05:05:39.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:05:39.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:39.502: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=false. Elapsed: 8.088494754s
Jan 29 05:05:39.502: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:05:40.178: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:41.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.095407928s
Jan 29 05:05:41.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 10.097097523s
Jan 29 05:05:41.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:05:41.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:41.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=false. Elapsed: 10.089120694s
Jan 29 05:05:41.503: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:05:42.222: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:43.485: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 12.098116119s
Jan 29 05:05:43.485: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 12.098155551s
Jan 29 05:05:43.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:43.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:05:43.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=true. Elapsed: 12.089415733s
Jan 29 05:05:43.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" satisfied condition "running and ready, or succeeded"
Jan 29 05:05:43.503: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67]
Jan 29 05:05:43.503: INFO: Getting external IP address for bootstrap-e2e-minion-group-8xzv
Jan 29 05:05:43.503: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-8xzv(34.168.157.136:22)
Jan 29 05:05:44.027: INFO: ssh prow@34.168.157.136:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 29 05:05:44.027: INFO: ssh prow@34.168.157.136:22: stdout: ""
Jan 29 05:05:44.027: INFO: ssh prow@34.168.157.136:22: stderr: ""
Jan 29 05:05:44.027: INFO: ssh prow@34.168.157.136:22: exit code: 0
Jan 29 05:05:44.027: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-8xzv condition Ready to be false
Jan 29 05:05:44.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:44.266: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:45.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 14.095658726s
Jan 29 05:05:45.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:45.485: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 14.097740825s
Jan 29 05:05:45.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:05:46.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:46.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:47.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.095857286s
Jan 29 05:05:47.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:47.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 16.097232378s
Jan 29 05:05:47.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:05:48.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:48.356: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:49.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.095542097s
Jan 29 05:05:49.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:49.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 18.09713701s
Jan 29 05:05:49.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:05:50.201: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:50.400: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:51.486: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.099330334s
Jan 29 05:05:51.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:51.486: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 20.099340382s
Jan 29 05:05:51.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:05:52.244: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:52.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:05:53.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.095534163s
Jan 29 05:05:53.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:05:53.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false.
Elapsed: 22.096928005s Jan 29 05:05:53.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:54.287: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:54.488: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:55.485: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.098541945s Jan 29 05:05:55.485: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.098497871s Jan 29 05:05:55.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:55.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:56.332: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:56.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:57.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.09611812s Jan 29 05:05:57.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:57.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 26.097150917s Jan 29 05:05:57.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" satisfied condition "running and ready, or succeeded" Jan 29 05:05:58.375: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:58.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:59.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.095653516s Jan 29 05:05:59.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:00.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:00.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:01.485: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.097830237s Jan 29 05:06:01.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:02.462: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:02.659: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:03.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.095217s Jan 29 05:06:03.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:04.505: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:04.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:05.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.094518763s Jan 29 05:06:05.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:06.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:06.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:07.484: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.097026486s Jan 29 05:06:07.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:08.591: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:08.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:09.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 38.09464526s Jan 29 05:06:09.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:10.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:10.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:11.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.0944792s Jan 29 05:06:11.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:12.678: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:12.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:13.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.094515824s Jan 29 05:06:13.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:14.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:14.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:15.489: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.102021419s Jan 29 05:06:15.489: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:16.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:16.960: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:17.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.094599845s Jan 29 05:06:17.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:18.808: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:19.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:19.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.095379851s Jan 29 05:06:19.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:20.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:21.051: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-q3jk condition Ready to be true Jan 29 05:06:21.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:21.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.095102239s Jan 29 05:06:21.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:22.907: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:23.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:23.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.094236117s Jan 29 05:06:23.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:24.951: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:25.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:25.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.095223908s Jan 29 05:06:25.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:26.994: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:27.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:27.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.095096768s Jan 29 05:06:27.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:29.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:29.272: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:29.484: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.096783399s Jan 29 05:06:29.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:31.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:31.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:31.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.094758152s Jan 29 05:06:31.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:33.125: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:33.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:33.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.095084211s Jan 29 05:06:33.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:35.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:35.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:35.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m4.095229169s Jan 29 05:06:35.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:37.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:37.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:37.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m6.094768509s Jan 29 05:06:37.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:39.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:39.491: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.104335198s Jan 29 05:06:39.491: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:39.510: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:06:41.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:41.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.09568468s
Jan 29 05:06:41.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:41.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:43.345: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:43.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m12.095199758s
Jan 29 05:06:43.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:43.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:45.389: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:45.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m14.095688573s
Jan 29 05:06:45.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:45.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:47.432: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:47.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m16.095820825s
Jan 29 05:06:47.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:47.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:49.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:49.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m18.095131893s
Jan 29 05:06:49.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:49.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:51.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.094880609s
Jan 29 05:06:51.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:51.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status.
AppArmor enabled
Jan 29 05:06:51.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:53.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.095700498s
Jan 29 05:06:53.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:53.563: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:53.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:55.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m24.09506888s
Jan 29 05:06:55.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:55.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:55.861: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:57.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m26.09478696s
Jan 29 05:06:57.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:57.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:57.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:59.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m28.095821756s
Jan 29 05:06:59.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:59.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:59.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:01.543: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m30.155802676s
Jan 29 05:07:01.543: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:01.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:01.991: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:03.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m32.094873907s
Jan 29 05:07:03.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:03.802: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:04.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:05.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m34.095577425s
Jan 29 05:07:05.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:05.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:06.084: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:07.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m36.094787287s
Jan 29 05:07:07.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:07.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:08.128: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:09.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m38.095402331s
Jan 29 05:07:09.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:09.941: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:10.173: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:11.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m40.094674973s
Jan 29 05:07:11.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:11.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:12.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:13.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false.
Elapsed: 1m42.095475557s
Jan 29 05:07:13.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:14.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:14.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:15.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m44.0947861s
Jan 29 05:07:15.482: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 05:07:15.482: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true.
Pods: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0]
Jan 29 05:07:15.482: INFO: Getting external IP address for bootstrap-e2e-minion-group-fr2s
Jan 29 05:07:15.482: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-fr2s(104.196.249.18:22)
Jan 29 05:07:15.998: INFO: ssh prow@104.196.249.18:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 29 05:07:15.998: INFO: ssh prow@104.196.249.18:22: stdout: ""
Jan 29 05:07:15.998: INFO: ssh prow@104.196.249.18:22: stderr: ""
Jan 29 05:07:15.998: INFO: ssh prow@104.196.249.18:22: exit code: 0
Jan 29 05:07:15.998: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-fr2s condition Ready to be false
Jan 29 05:07:16.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:16.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:16.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:18.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:18.126: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:18.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
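The SSH command logged above uses a deliberate detach pattern: `nohup`, output redirection, backgrounding, and a 10-second `sleep` let the SSH session return exit code 0 (as seen in the log) before the reboot kills the connection. A small Go sketch that builds such a command string (`detachedCommand` is a hypothetical helper, not the framework's actual function):

```go
package main

import "fmt"

// detachedCommand wraps cmd so it survives the SSH session ending:
// nohup detaches it from the terminal, redirection frees the pipes,
// '&' backgrounds it, and the sleep gives the session time to return
// a clean exit code before the node goes down.
func detachedCommand(cmd string, delaySeconds int) string {
	return fmt.Sprintf("nohup sh -c 'sleep %d && %s' >/dev/null 2>&1 &", delaySeconds, cmd)
}

func main() {
	fmt.Println(detachedCommand("sudo reboot", 10))
}
```

Without this pattern the SSH call would hang or report a spurious error when the connection drops mid-reboot, instead of the clean stdout/stderr/exit-code triple recorded above.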
Jan 29 05:07:20.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:20.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:20.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:22.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:22.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:22.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:24.218: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:24.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:24.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:26.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:26.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false.
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:26.521: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:28.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:28.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:28.563: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:30.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:30.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:30.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:32.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:32.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:32.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:34.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:34.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:34.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:36.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:36.521: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:36.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:38.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:38.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:38.783: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:40.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:40.608: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false.
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:40.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:42.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:42.652: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:42.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:44.652: INFO: Node bootstrap-e2e-minion-group-8xzv didn't reach desired Ready condition status (false) within 2m0s
Jan 29 05:07:44.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:44.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:46.698: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:46.960: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:48.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:49.022: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true.
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:50.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:51.067: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:52.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:53.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:54.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:55.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:56.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:57.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:58.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:59.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:01.006: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:01.285: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:03.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:03.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:05.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:05.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:07.138: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:07.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:09.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:09.460: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:11.229: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:11.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:13.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:13.546: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:15.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:15.589: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:17.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:17.633: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:06:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:08:15 +0000 UTC}]. Failure
Jan 29 05:08:19.403: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:19.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:06:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:08:15 +0000 UTC}]. Failure
Jan 29 05:08:21.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:21.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:06:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:08:15 +0000 UTC}]. Failure
Jan 29 05:08:23.493: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:23.766: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:06:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:08:15 +0000 UTC}]. Failure
Jan 29 05:08:25.536: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:25.810: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:08:25.810: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bjzbd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:08:25.810: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:08:25.854: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. Elapsed: 44.147849ms
Jan 29 05:08:25.854: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 44.545267ms
Jan 29 05:08:25.854: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:06:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:08:25.854: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:06:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }]
Jan 29 05:08:27.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:27.899: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=true. Elapsed: 2.089669007s
Jan 29 05:08:27.899: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=true. Elapsed: 2.089248535s
Jan 29 05:08:27.899: INFO: Pod "metadata-proxy-v0.1-bjzbd" satisfied condition "running and ready, or succeeded"
Jan 29 05:08:27.899: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" satisfied condition "running and ready, or succeeded"
Jan 29 05:08:27.899: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:08:27.899: INFO: Reboot successful on node bootstrap-e2e-minion-group-q3jk
Jan 29 05:08:29.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:31.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:33.710: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:35.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:37.800: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:39.845: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:41.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:43.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:45.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:48.020: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:50.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:52.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:54.157: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:56.201: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-fr2s condition Ready to be true
Jan 29 05:08:56.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:58.291: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:09:00.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:09:02.379: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:04.423: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:06.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:08.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:10.556: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:12.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:14.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:16.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:18.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:20.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:22.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:24.862: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:26.906: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0]
Jan 29 05:09:26.906: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:09:26.906: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-4cpk6" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:09:26.906: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:09:26.906: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xmtst" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:09:26.951: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=false. Elapsed: 44.618689ms
Jan 29 05:09:26.951: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-4cpk6' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:08:55 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:09:24 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:09:26.952: INFO: Pod "metadata-proxy-v0.1-xmtst": Phase="Running", Reason="", readiness=false. Elapsed: 45.461225ms
Jan 29 05:09:26.952: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-xmtst' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:08:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:09:26.952: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 45.936044ms
Jan 29 05:09:26.952: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:08:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:09:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:09:26.952: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 45.696687ms
Jan 29 05:09:26.952: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:58 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:58 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:09:28.997: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.090597534s
Jan 29 05:09:28.997: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:08:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:09:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:09:28.997: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=false. Elapsed: 2.090513438s
Jan 29 05:09:28.997: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-4cpk6' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:08:55 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:09:24 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:09:28.998: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 2.091718132s
Jan 29 05:09:28.998: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" satisfied condition "running and ready, or succeeded"
Jan 29 05:09:28.999: INFO: Pod "metadata-proxy-v0.1-xmtst": Phase="Running", Reason="", readiness=true. Elapsed: 2.092615124s
Jan 29 05:09:28.999: INFO: Pod "metadata-proxy-v0.1-xmtst" satisfied condition "running and ready, or succeeded"
Jan 29 05:09:30.996: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 4.089987092s
Jan 29 05:09:30.996: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 05:09:30.997: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=true. Elapsed: 4.091184772s
Jan 29 05:09:30.997: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6" satisfied condition "running and ready, or succeeded"
Jan 29 05:09:30.997: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0]
Jan 29 05:09:30.997: INFO: Reboot successful on node bootstrap-e2e-minion-group-fr2s
Jan 29 05:09:30.997: INFO: Node bootstrap-e2e-minion-group-8xzv failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 05:09:30.998
< Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 05:09:30.998 (3m59.895s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:09:30.998
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 05:09:30.998
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-cgf5q to bootstrap-e2e-minion-group-q3jk
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 952.42369ms (952.43546ms including waiting)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Container coredns failed liveness probe, will be restarted
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-cgf5q
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-cgf5q
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
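Each "running and ready, or succeeded" wait above reduces to a simple predicate over the pod's phase and Ready condition (the `readiness=true/false` part of each Pod line). A sketch with a hypothetical stripped-down pod type, not the framework's actual `v1.Pod`:

```go
package main

import "fmt"

// podState carries the two fields the log lines print for each pod:
// the phase ("Running", "Succeeded", ...) and whether the pod's Ready
// condition is True.
type podState struct {
	Phase string
	Ready bool
}

// runningReadyOrSucceeded is the condition the test polls each pod for:
// it passes once the pod is Running with Ready=True, or has Succeeded.
func runningReadyOrSucceeded(p podState) bool {
	return p.Phase == "Succeeded" || (p.Phase == "Running" && p.Ready)
}

func main() {
	// kube-proxy right after the node came back: Running but not yet Ready.
	fmt.Println(runningReadyOrSucceeded(podState{"Running", false})) // false
	// the same pod about two seconds later, once Ready flipped to True.
	fmt.Println(runningReadyOrSucceeded(podState{"Running", true})) // true
}
```

This is why the early "Error evaluating pod condition" lines are followed by "satisfied condition" a few seconds later: the poll simply repeats until the predicate holds or the 5m0s budget runs out.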
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-slgkj to bootstrap-e2e-minion-group-fr2s
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.898971681s (1.89898123s including waiting)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.3:8181/ready": dial tcp 10.64.1.3:8181: connect: connection refused
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.14:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-slgkj_kube-system(dbbd495d-f306-4c8c-894e-7ffeed82522f)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.17:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-slgkj
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-slgkj
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.28:8181/ready": dial tcp 10.64.1.28:8181: connect: connection refused
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-slgkj
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-cgf5q
Jan 29 05:09:31.067: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 05:09:31.067: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 05:09:31.067: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb)
Jan 29 05:09:31.067: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3ae10 became leader
Jan 29 05:09:31.067: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_dffea became leader
Jan 29 05:09:31.067: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ed5cc became leader
Jan 29 05:09:31.067: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_794a2 became leader
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-6hl7x to bootstrap-e2e-minion-group-fr2s
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 3.144705382s (3.144721595s including waiting)
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6hl7x_kube-system(52759282-0d41-4927-b752-92975d4abd4b)
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.11:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9b2fb to bootstrap-e2e-minion-group-8xzv
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 645.890763ms (645.907319ms including waiting)
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "http://10.64.3.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9b2fb_kube-system(3a803d1f-02e7-4777-9121-bdfdc7214e10)
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-fn54g to bootstrap-e2e-minion-group-q3jk
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 612.375349ms (612.383552ms including waiting)
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container konnectivity-agent
Jan 29 05:09:31.067: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-6hl7x
Jan 29 05:09:31.067: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9b2fb
Jan 29 05:09:31.067: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-fn54g
Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 05:09:31.067: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 05:09:31.067: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 05:09:31.067: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 05:09:31.067: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_851f0e92-d2b2-4cde-86fc-61b887267173 became leader
Jan 29 05:09:31.067: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_4df1cff4-9e2b-4aeb-9add-320edc370972 became leader
Jan 29 05:09:31.067: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_92df568f-7c41-431d-807f-71ca5118c228 became leader
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-4cpk6 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.762583689s (1.762598755s including waiting)
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container autoscaler
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container autoscaler
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container autoscaler
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-4cpk6
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-4cpk6_kube-system(e3c2ac3f-c229-4e3c-b75e-20da721f6be0)
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-4cpk6
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container autoscaler
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container autoscaler
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-4cpk6
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-8xzv_kube-system(f235327fad7051b81c0d60b9bd4fc9cd)
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-fr2s_kube-system(4bc9af4e1f2e0f804199bc97b6d57205)
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-q3jk_kube-system(44fdbb00bb3eea51169ca9d04a5a869e)
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container kube-proxy
Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 05:09:31.067: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e085793b-531d-4e46-9a13-2df2b0a0cf3c became leader
Jan 29 05:09:31.067: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_bae8712b-9a00-4c39-8044-92141d52bf42 became leader
Jan 29 05:09:31.067: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_8a10b01f-10b3-404d-8242-a505ae074a1a became leader
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-nw9t6 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 953.200505ms (953.207255ms including waiting)
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container default-http-backend
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container default-http-backend
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.4:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-nw9t6
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-nw9t6
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container default-http-backend
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-nw9t6
Jan 29 05:09:31.067: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5sc67 to bootstrap-e2e-minion-group-8xzv
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 814.209742ms (814.233537ms including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.012691866s (2.01271786s including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bjzbd to bootstrap-e2e-minion-group-q3jk
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 688.204291ms (688.219796ms including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.604765022s (1.604774619s including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container metadata-proxy Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container metadata-proxy Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container prometheus-to-sd-exporter Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container prometheus-to-sd-exporter Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-kn874 to bootstrap-e2e-master Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 908.820679ms (908.830517ms including waiting) Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulling: Pulling image 
"gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.340504431s (2.340511774s including waiting) Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-xmtst to bootstrap-e2e-minion-group-fr2s Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.221606ms (680.236479ms including waiting) Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metadata-proxy Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metadata-proxy Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.718964429s (1.718982952s including waiting) Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container 
prometheus-to-sd-exporter Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container prometheus-to-sd-exporter Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metadata-proxy Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metadata-proxy Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container prometheus-to-sd-exporter Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container prometheus-to-sd-exporter Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-kn874 Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-xmtst Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: 
metadata-proxy-v0.1-bjzbd Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5sc67 Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-gsfr8 to bootstrap-e2e-minion-group-fr2s Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.583628574s (2.58363679s including waiting) Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metrics-server Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metrics-server Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.516746563s (2.516753465s including 
waiting) Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metrics-server-nanny Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metrics-server-nanny Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container metrics-server Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container metrics-server-nanny Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-gsfr8 Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-gsfr8 Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-jrjtd to bootstrap-e2e-minion-group-8xzv Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.400479222s (1.400489838s including waiting) Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 05:09:31.068: INFO: event 
for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.068205376s (1.068216228s including waiting) Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server-nanny Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server-nanny Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": dial tcp 10.64.3.3:10250: connect: connection refused Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": dial tcp 10.64.3.3:10250: connect: connection refused Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server-nanny Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-jrjtd_kube-system(f2309b34-237d-44df-b1a4-7ec957702321) Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server-nanny Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server-nanny Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.11:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.11:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } 
SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-jrjtd Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1 Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1 Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-fr2s Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.082767621s (2.082775326s including waiting) Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container volume-snapshot-controller Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container volume-snapshot-controller Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet 
bootstrap-e2e-minion-group-fr2s} Killing: Stopping container volume-snapshot-controller Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(846294c9-7431-4763-8373-c9c072cf9808) Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0 Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container volume-snapshot-controller Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container volume-snapshot-controller Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container volume-snapshot-controller Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:09:31.068 (69ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:09:31.068 Jan 29 05:09:31.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:09:31.114 (47ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 05:09:31.114 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 05:09:31.114 (0s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] 
Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:09:31.114 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:09:31.115 STEP: Collecting events from namespace "reboot-2860". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 05:09:31.115 STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 05:09:31.157 Jan 29 05:09:31.200: INFO: POD NODE PHASE GRACE CONDITIONS Jan 29 05:09:31.200: INFO: Jan 29 05:09:31.252: INFO: Logging node info for node bootstrap-e2e-master Jan 29 05:09:31.296: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 573932df-4ac9-4a16-9c02-0cca288f19f4 2088 0 2023-01-29 04:56:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager 
Update v1 2023-01-29 04:56:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 04:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 05:07:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:07:01 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:07:01 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:07:01 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:07:01 +0000 UTC,LastTransitionTime:2023-01-29 04:56:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.145.111.53,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6902beac0ad0c174454f307f49ae755d,SystemUUID:6902beac-0ad0-c174-454f-307f49ae755d,BootID:8368e14e-fc42-4513-ba6d-e7ce07a08226,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:09:31.297: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 05:09:31.355: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 05:10:01.399: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" Jan 29 05:10:01.399: INFO: Logging node info for node bootstrap-e2e-minion-group-8xzv Jan 29 05:10:04.701: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8xzv 30426c99-1665-4753-a8aa-3e12ad653388 2281 0 2023-01-29 04:56:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8xzv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 05:01:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 05:08:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 05:08:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 05:08:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-minion-group-8xzv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 
UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:38 +0000 UTC,LastTransitionTime:2023-01-29 05:03:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:38 +0000 UTC,LastTransitionTime:2023-01-29 05:03:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:38 +0000 UTC,LastTransitionTime:2023-01-29 05:03:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:08:38 +0000 UTC,LastTransitionTime:2023-01-29 05:08:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.157.136,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8xzv.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8xzv.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:863ee453ce39c71dfd70eb604edc1f2d,SystemUUID:863ee453-ce39-c71d-fd70-eb604edc1f2d,BootID:c99c408e-5e89-47bc-b3e3-841c7ce746dd,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:10:04.702: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8xzv Jan 29 05:10:04.748: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8xzv Jan 29 05:10:07.723: INFO: metrics-server-v0.5.2-867b8754b9-jrjtd started at 2023-01-29 04:56:43 +0000 UTC (0+2 container statuses recorded) Jan 29 05:10:07.723: INFO: Container metrics-server ready: true, restart count 6 Jan 29 05:10:07.723: INFO: Container metrics-server-nanny ready: true, restart count 6 Jan 29 05:10:07.723: INFO: kube-proxy-bootstrap-e2e-minion-group-8xzv started at 2023-01-29 04:56:09 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:07.723: INFO: Container kube-proxy ready: false, restart count 4 Jan 29 05:10:07.723: INFO: metadata-proxy-v0.1-5sc67 started at 2023-01-29 04:56:10 +0000 UTC (0+2 container statuses recorded) Jan 29 05:10:07.723: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 05:10:07.723: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 05:10:07.723: INFO: konnectivity-agent-9b2fb started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:07.723: INFO: Container konnectivity-agent ready: true, restart count 5 Jan 29 05:10:07.916: INFO: Latency metrics for node bootstrap-e2e-minion-group-8xzv Jan 29 05:10:07.916: INFO: Logging node info for node bootstrap-e2e-minion-group-fr2s Jan 29 05:10:07.959: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-fr2s 0621ab1c-3d02-4018-837d-bc99627df4e9 2430 0 2023-01-29 04:56:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-fr2s kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 05:08:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 05:09:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 05:09:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 05:09:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-minion-group-fr2s,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 
UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.196.249.18,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-fr2s.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-fr2s.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cc854fd12580be1e80a1147a3c758d9,SystemUUID:7cc854fd-1258-0be1-e80a-1147a3c758d9,BootID:385095b8-987a-4073-bd07-cf580cd4c436,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:10:07.960: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-fr2s Jan 29 05:10:08.010: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-fr2s Jan 29 05:10:08.079: INFO: kube-proxy-bootstrap-e2e-minion-group-fr2s started at 2023-01-29 04:56:06 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container kube-proxy ready: true, restart count 5 Jan 29 05:10:08.079: INFO: l7-default-backend-8549d69d99-nw9t6 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container default-http-backend ready: true, restart count 4 Jan 29 05:10:08.079: INFO: volume-snapshot-controller-0 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container 
volume-snapshot-controller ready: false, restart count 7 Jan 29 05:10:08.079: INFO: kube-dns-autoscaler-5f6455f985-4cpk6 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container autoscaler ready: true, restart count 3 Jan 29 05:10:08.079: INFO: coredns-6846b5b5f-slgkj started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container coredns ready: true, restart count 4 Jan 29 05:10:08.079: INFO: metadata-proxy-v0.1-xmtst started at 2023-01-29 04:56:07 +0000 UTC (0+2 container statuses recorded) Jan 29 05:10:08.079: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 05:10:08.079: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 05:10:08.079: INFO: konnectivity-agent-6hl7x started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 29 05:10:08.253: INFO: Latency metrics for node bootstrap-e2e-minion-group-fr2s Jan 29 05:10:08.253: INFO: Logging node info for node bootstrap-e2e-minion-group-q3jk Jan 29 05:10:08.295: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-q3jk c80d534d-cc20-420c-aa82-58825be6696f 2233 0 2023-01-29 04:56:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-q3jk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 05:06:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 05:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 05:08:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 05:08:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-minion-group-q3jk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 
UTC,LastTransitionTime:2023-01-29 05:08:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.82.121.186,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-q3jk.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-q3jk.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f4bd8995b54066c53ba6b862f7599c91,SystemUUID:f4bd8995-b540-66c5-3ba6-b862f7599c91,BootID:ae270d5d-8783-4663-8a9c-cac3385f9d75,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:10:08.296: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-q3jk Jan 29 05:10:08.343: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-q3jk Jan 29 05:10:08.407: INFO: kube-proxy-bootstrap-e2e-minion-group-q3jk started at 2023-01-29 04:56:08 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.407: INFO: Container kube-proxy ready: true, restart count 4 Jan 29 05:10:08.407: INFO: metadata-proxy-v0.1-bjzbd started at 2023-01-29 04:56:09 +0000 UTC (0+2 container statuses recorded) Jan 29 05:10:08.407: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 05:10:08.407: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 05:10:08.407: INFO: konnectivity-agent-fn54g started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.407: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 29 05:10:08.407: INFO: coredns-6846b5b5f-cgf5q started at 2023-01-29 04:56:28 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.407: INFO: Container coredns ready: true, 
restart count 5 Jan 29 05:10:08.569: INFO: Latency metrics for node bootstrap-e2e-minion-group-q3jk END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:10:08.569 (37.454s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:10:08.569 (37.454s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:10:08.569 STEP: Destroying namespace "reboot-2860" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 05:10:08.569 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:10:08.616 (47ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:10:08.616 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:10:08.616 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 05:09:30.998
from junit_01.xml
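The transcript below repeatedly evaluates each pod against the condition "running and ready, or succeeded" every two seconds. A pod passes if its phase is Succeeded, or if it is Running with condition `{Ready True}`; otherwise the framework logs the full condition list and retries. A pared-down, illustrative sketch of that check (the actual framework inspects full `corev1.Pod` objects via client-go; the type and function names here are assumptions):

```go
package main

import "fmt"

// podCondition is a simplified stand-in for corev1.PodCondition.
type podCondition struct {
	Type   string
	Status string
}

// runningReadyOrSucceeded reports whether a pod satisfies the
// "running and ready, or succeeded" condition: Succeeded phase, or
// Running phase with condition {Ready True}.
func runningReadyOrSucceeded(phase string, conds []podCondition) bool {
	if phase == "Succeeded" {
		return true
	}
	if phase != "Running" {
		return false
	}
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	// Mirrors kube-proxy-bootstrap-e2e-minion-group-fr2s below:
	// Phase=Running but Ready=False, so the check fails and is retried.
	notReady := []podCondition{{"Ready", "False"}, {"ContainersReady", "False"}}
	fmt.Println(runningReadyOrSucceeded("Running", notReady))
	fmt.Println(runningReadyOrSucceeded("Running", []podCondition{{"Ready", "True"}}))
}
```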
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:03:22.132 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:03:22.132 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:03:22.132 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 05:03:22.132 Jan 29 05:03:22.132: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 05:03:22.135 Jan 29 05:03:22.174: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:24.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:26.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:28.215: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:30.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:32.214: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:34.214: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:36.214: INFO: Unexpected error while creating namespace: 
Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:38.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:40.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:42.215: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:44.215: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:46.216: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:48.215: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:03:50.214: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 05:05:30.929 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 05:05:31.021 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:05:31.103 (2m8.972s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 05:05:31.103 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - 
test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 05:05:31.103 (0s) > Enter [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 05:05:31.103 Jan 29 05:05:31.287: INFO: Getting bootstrap-e2e-minion-group-fr2s Jan 29 05:05:31.287: INFO: Getting bootstrap-e2e-minion-group-q3jk Jan 29 05:05:31.287: INFO: Getting bootstrap-e2e-minion-group-8xzv Jan 29 05:05:31.342: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-q3jk condition Ready to be true Jan 29 05:05:31.342: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-fr2s condition Ready to be true Jan 29 05:05:31.370: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-8xzv condition Ready to be true Jan 29 05:05:31.387: INFO: Node bootstrap-e2e-minion-group-q3jk has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd] Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd] Jan 29 05:05:31.387: INFO: Node bootstrap-e2e-minion-group-fr2s has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0] Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0] Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bjzbd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" in namespace 
"kube-system" to be "running and ready, or succeeded" Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-4cpk6" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:05:31.387: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xmtst" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:05:31.413: INFO: Node bootstrap-e2e-minion-group-8xzv has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67] Jan 29 05:05:31.413: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67] Jan 29 05:05:31.413: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5sc67" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:05:31.414: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:05:31.436: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=true. Elapsed: 49.215324ms Jan 29 05:05:31.436: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6" satisfied condition "running and ready, or succeeded" Jan 29 05:05:31.439: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 52.427942ms Jan 29 05:05:31.439: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 52.469635ms Jan 29 05:05:31.439: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=true. 
Elapsed: 52.511623ms Jan 29 05:05:31.439: INFO: Pod "metadata-proxy-v0.1-bjzbd" satisfied condition "running and ready, or succeeded" Jan 29 05:05:31.439: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:31.439: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:31.440: INFO: Pod "metadata-proxy-v0.1-xmtst": Phase="Running", Reason="", readiness=true. Elapsed: 52.442265ms Jan 29 05:05:31.440: INFO: Pod "metadata-proxy-v0.1-xmtst" satisfied condition "running and ready, or succeeded" Jan 29 05:05:31.440: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=true. 
Elapsed: 52.610726ms Jan 29 05:05:31.440: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" satisfied condition "running and ready, or succeeded" Jan 29 05:05:31.440: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd] Jan 29 05:05:31.440: INFO: Getting external IP address for bootstrap-e2e-minion-group-q3jk Jan 29 05:05:31.440: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-q3jk(34.82.121.186:22) Jan 29 05:05:31.459: INFO: Pod "metadata-proxy-v0.1-5sc67": Phase="Running", Reason="", readiness=true. Elapsed: 45.409107ms Jan 29 05:05:31.459: INFO: Pod "metadata-proxy-v0.1-5sc67" satisfied condition "running and ready, or succeeded" Jan 29 05:05:31.459: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=false. Elapsed: 45.279875ms Jan 29 05:05:31.459: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:05:31.954: INFO: ssh prow@34.82.121.186:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 05:05:31.954: INFO: ssh prow@34.82.121.186:22: stdout: "" Jan 29 05:05:31.954: INFO: ssh prow@34.82.121.186:22: stderr: "" Jan 29 05:05:31.954: INFO: ssh prow@34.82.121.186:22: exit code: 0 Jan 29 05:05:31.954: INFO: Waiting up to 2m0s for node 
bootstrap-e2e-minion-group-q3jk condition Ready to be false Jan 29 05:05:31.998: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:33.486: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 2.098955357s Jan 29 05:05:33.486: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.098998826s Jan 29 05:05:33.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:33.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:33.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", 
readiness=false. Elapsed: 2.088978611s Jan 29 05:05:33.503: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:05:34.043: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:35.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 4.096037466s Jan 29 05:05:35.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:35.485: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. 
Elapsed: 4.097588s Jan 29 05:05:35.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:35.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=false. Elapsed: 4.089376832s Jan 29 05:05:35.503: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:05:36.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:37.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.095191048s Jan 29 05:05:37.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:37.483: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 6.096392189s Jan 29 05:05:37.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:37.502: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=false. 
Elapsed: 6.088444811s Jan 29 05:05:37.502: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:05:38.135: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:39.485: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 8.098348224s Jan 29 05:05:39.485: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.098334844s Jan 29 05:05:39.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:39.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:39.502: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=false. 
Elapsed: 8.088494754s Jan 29 05:05:39.502: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:05:40.178: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:41.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 10.095407928s Jan 29 05:05:41.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.097097523s Jan 29 05:05:41.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:41.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:41.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=false. 
Elapsed: 10.089120694s Jan 29 05:05:41.503: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-8xzv' on 'bootstrap-e2e-minion-group-8xzv' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:12 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }] Jan 29 05:05:42.222: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:43.485: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 12.098116119s Jan 29 05:05:43.485: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 12.098155551s Jan 29 05:05:43.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:43.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:43.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=true. Elapsed: 12.089415733s Jan 29 05:05:43.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" satisfied condition "running and ready, or succeeded" Jan 29 05:05:43.503: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67] Jan 29 05:05:43.503: INFO: Getting external IP address for bootstrap-e2e-minion-group-8xzv Jan 29 05:05:43.503: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-8xzv(34.168.157.136:22) Jan 29 05:05:44.027: INFO: ssh prow@34.168.157.136:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 & Jan 29 05:05:44.027: INFO: ssh prow@34.168.157.136:22: stdout: "" Jan 29 05:05:44.027: INFO: ssh prow@34.168.157.136:22: stderr: "" Jan 29 05:05:44.027: INFO: ssh prow@34.168.157.136:22: exit code: 0 Jan 29 05:05:44.027: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-8xzv condition Ready to be false Jan 29 05:05:44.070: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:44.266: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:45.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 14.095658726s Jan 29 05:05:45.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:45.485: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 14.097740825s Jan 29 05:05:45.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:46.116: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:46.313: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 05:05:47.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 16.095857286s Jan 29 05:05:47.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:47.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 16.097232378s Jan 29 05:05:47.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:48.158: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:48.356: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:49.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 18.095542097s Jan 29 05:05:49.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:49.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 18.09713701s Jan 29 05:05:49.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:50.201: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 05:05:50.400: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:51.486: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 20.099330334s Jan 29 05:05:51.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:51.486: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 20.099340382s Jan 29 05:05:51.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:52.244: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:52.444: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:53.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 22.095534163s Jan 29 05:05:53.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:53.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. 
Elapsed: 22.096928005s Jan 29 05:05:53.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:54.287: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:54.488: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:55.485: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 24.098541945s Jan 29 05:05:55.485: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. 
Elapsed: 24.098497871s Jan 29 05:05:55.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:55.486: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:17 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }] Jan 29 05:05:56.332: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:56.530: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:57.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 26.09611812s Jan 29 05:05:57.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:05:57.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 26.097150917s Jan 29 05:05:57.484: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" satisfied condition "running and ready, or succeeded" Jan 29 05:05:58.375: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:58.573: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:05:59.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 28.095653516s Jan 29 05:05:59.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:00.418: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:00.617: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:01.485: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 30.097830237s Jan 29 05:06:01.485: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:02.462: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:02.659: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:03.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 32.095217s Jan 29 05:06:03.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:04.505: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:04.703: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:05.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 34.094518763s Jan 29 05:06:05.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:06.548: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:06.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:07.484: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 36.097026486s Jan 29 05:06:07.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:08.591: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:08.790: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:09.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 38.09464526s Jan 29 05:06:09.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:10.634: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:10.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:11.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 40.0944792s Jan 29 05:06:11.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:12.678: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:12.875: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:13.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 42.094515824s Jan 29 05:06:13.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:14.721: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:14.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:15.489: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 44.102021419s Jan 29 05:06:15.489: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:16.765: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:16.960: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:17.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 46.094599845s Jan 29 05:06:17.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:18.808: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:19.004: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:19.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 48.095379851s Jan 29 05:06:19.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:20.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:21.051: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-q3jk condition Ready to be true Jan 29 05:06:21.094: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:21.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 50.095102239s Jan 29 05:06:21.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:22.907: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:23.137: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:23.481: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 52.094236117s Jan 29 05:06:23.481: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:24.951: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:25.185: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:25.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 54.095223908s Jan 29 05:06:25.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:26.994: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:27.228: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:27.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 56.095096768s Jan 29 05:06:27.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:29.037: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:29.272: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:29.484: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 58.096783399s Jan 29 05:06:29.484: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:31.082: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:31.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:31.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m0.094758152s Jan 29 05:06:31.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }] Jan 29 05:06:33.125: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:06:33.361: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:06:33.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. 
Elapsed: 1m2.095084211s
Jan 29 05:06:33.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:35.168: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:35.404: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:35.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m4.095229169s
Jan 29 05:06:35.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:37.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:37.448: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:37.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m6.094768509s
Jan 29 05:06:37.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:39.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:39.491: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m8.104335198s
Jan 29 05:06:39.491: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:39.510: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:41.301: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:41.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m10.09568468s
Jan 29 05:06:41.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:41.553: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:43.345: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:43.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m12.095199758s
Jan 29 05:06:43.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:43.598: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:45.389: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:45.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m14.095688573s
Jan 29 05:06:45.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:45.642: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:47.432: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:47.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m16.095820825s
Jan 29 05:06:47.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:47.686: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:49.476: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:49.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m18.095131893s
Jan 29 05:06:49.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:49.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:51.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m20.094880609s
Jan 29 05:06:51.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:51.520: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:51.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:53.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m22.095700498s
Jan 29 05:06:53.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:53.563: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:53.818: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:55.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m24.09506888s
Jan 29 05:06:55.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:55.606: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:55.861: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:57.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m26.09478696s
Jan 29 05:06:57.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:57.650: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:57.905: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:06:59.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m28.095821756s
Jan 29 05:06:59.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:06:59.715: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:06:59.949: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:01.543: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m30.155802676s
Jan 29 05:07:01.543: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:01.759: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:01.991: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:03.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m32.094873907s
Jan 29 05:07:03.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:03.802: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:04.040: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:05.483: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m34.095577425s
Jan 29 05:07:05.483: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:05.846: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:06.084: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:07.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m36.094787287s
Jan 29 05:07:07.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:07.890: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:08.128: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:09.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m38.095402331s
Jan 29 05:07:09.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:09.941: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:10.173: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:11.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m40.094674973s
Jan 29 05:07:11.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:11.984: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:12.216: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:13.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 1m42.095475557s
Jan 29 05:07:13.482: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:04:20 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:07:14.039: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:14.261: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:15.482: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 1m44.0947861s
Jan 29 05:07:15.482: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 05:07:15.482: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0]
Jan 29 05:07:15.482: INFO: Getting external IP address for bootstrap-e2e-minion-group-fr2s
Jan 29 05:07:15.482: INFO: SSH "nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-fr2s(104.196.249.18:22)
Jan 29 05:07:15.998: INFO: ssh prow@104.196.249.18:22: command: nohup sh -c 'sleep 10 && sudo reboot' >/dev/null 2>&1 &
Jan 29 05:07:15.998: INFO: ssh prow@104.196.249.18:22: stdout: ""
Jan 29 05:07:15.998: INFO: ssh prow@104.196.249.18:22: stderr: ""
Jan 29 05:07:15.998: INFO: ssh prow@104.196.249.18:22: exit code: 0
Jan 29 05:07:15.998: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-fr2s condition Ready to be false
Jan 29 05:07:16.045: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:16.083: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:16.303: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:18.089: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:18.126: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:18.347: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
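The SSH payload logged above uses a standard trick for rebooting a box over SSH without the session hanging or reporting a spurious failure: the command is detached under nohup with stdout/stderr redirected, and the sleep delays the actual reboot long enough for the SSH session to close cleanly with exit code 0 (hence the empty stdout/stderr and exit code 0 in the log). A small Go sketch of how such a command string can be composed; the helper name and delay parameter are illustrative, not taken from the real test code:

```go
package main

import "fmt"

// detachedRebootCmd builds a shell command that reboots the host after
// delaySeconds. nohup plus the trailing "&" detach the work from the
// SSH session, so ssh returns immediately instead of dying mid-reboot.
func detachedRebootCmd(delaySeconds int) string {
	return fmt.Sprintf("nohup sh -c 'sleep %d && sudo reboot' >/dev/null 2>&1 &", delaySeconds)
}

func main() {
	fmt.Println(detachedRebootCmd(10))
}
```

With a delay of 10 this yields exactly the command string seen in the log.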
Jan 29 05:07:20.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:20.170: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:20.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:22.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:22.213: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:22.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:24.218: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:24.257: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:24.478: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:26.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:26.302: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:26.521: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:28.306: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:28.346: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:28.563: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:30.350: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:30.391: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:30.607: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:32.393: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:32.435: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:32.651: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:34.436: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:34.477: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:34.695: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:36.479: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:36.521: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:36.738: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:38.522: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:38.565: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:38.783: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:40.566: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:40.608: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:40.826: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:42.609: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:42.652: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:42.870: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:44.652: INFO: Node bootstrap-e2e-minion-group-8xzv didn't reach desired Ready condition status (false) within 2m0s
Jan 29 05:07:44.653: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:44.914: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:46.698: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:46.960: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:48.741: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:49.022: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:50.785: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:51.067: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:52.829: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:53.110: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:54.873: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:55.153: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:56.917: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:57.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:07:58.961: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:07:59.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:01.006: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:01.285: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:03.050: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:03.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:05.095: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:05.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:07.138: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:07.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:09.183: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:09.460: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:11.229: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:11.503: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:13.273: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:13.546: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:15.317: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:15.589: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:17.360: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:17.633: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:06:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:08:15 +0000 UTC}]. Failure
Jan 29 05:08:19.403: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:19.677: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:06:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:08:15 +0000 UTC}]. Failure
Jan 29 05:08:21.449: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:21.722: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:06:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:08:15 +0000 UTC}]. Failure
Jan 29 05:08:23.493: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:23.766: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:06:20 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:08:15 +0000 UTC}]. Failure
Jan 29 05:08:25.536: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:25.810: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:08:25.810: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bjzbd" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:08:25.810: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:08:25.854: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=false. Elapsed: 44.147849ms
Jan 29 05:08:25.854: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=false. Elapsed: 44.545267ms
Jan 29 05:08:25.854: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-bjzbd' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:06:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:09 +0000 UTC }]
Jan 29 05:08:25.854: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-q3jk' on 'bootstrap-e2e-minion-group-q3jk' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:06:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:00:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:08 +0000 UTC }]
Jan 29 05:08:27.580: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:27.899: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=true. Elapsed: 2.089669007s
Jan 29 05:08:27.899: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=true. Elapsed: 2.089248535s
Jan 29 05:08:27.899: INFO: Pod "metadata-proxy-v0.1-bjzbd" satisfied condition "running and ready, or succeeded"
Jan 29 05:08:27.899: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" satisfied condition "running and ready, or succeeded"
Jan 29 05:08:27.899: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd]
Jan 29 05:08:27.899: INFO: Reboot successful on node bootstrap-e2e-minion-group-q3jk
Jan 29 05:08:29.623: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:31.666: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:33.710: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:35.756: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:37.800: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:39.845: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:41.888: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:43.933: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:45.977: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:48.020: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:50.066: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:52.114: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:54.157: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled
Jan 29 05:08:56.201: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-fr2s condition Ready to be true
Jan 29 05:08:56.248: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:08:58.291: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:09:00.335: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:09:02.379: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:04.423: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:06.469: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:08.513: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:10.556: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:12.599: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:14.643: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:16.685: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:18.730: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:20.774: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:22.817: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoSchedule 2023-01-29 05:08:55 +0000 UTC} {node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:24.862: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 05:09:00 +0000 UTC}]. Failure
Jan 29 05:09:26.906: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0]
Jan 29 05:09:26.906: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:09:26.906: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-4cpk6" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:09:26.906: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:09:26.906: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xmtst" in namespace "kube-system" to be "running and ready, or succeeded"
Jan 29 05:09:26.951: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=false. Elapsed: 44.618689ms
Jan 29 05:09:26.951: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-4cpk6' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:08:55 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:09:24 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:09:26.952: INFO: Pod "metadata-proxy-v0.1-xmtst": Phase="Running", Reason="", readiness=false. Elapsed: 45.461225ms
Jan 29 05:09:26.952: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'metadata-proxy-v0.1-xmtst' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:08:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:09:26.952: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 45.936044ms
Jan 29 05:09:26.952: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:08:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:09:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:09:26.952: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=false. Elapsed: 45.696687ms
Jan 29 05:09:26.952: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-proxy-bootstrap-e2e-minion-group-fr2s' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:58 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:05:58 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:06 +0000 UTC }]
Jan 29 05:09:28.997: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=false. Elapsed: 2.090597534s
Jan 29 05:09:28.997: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'volume-snapshot-controller-0' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:08:55 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:09:24 +0000 UTC ContainersNotReady containers with unready status: [volume-snapshot-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:09:28.997: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=false. Elapsed: 2.090513438s
Jan 29 05:09:28.997: INFO: Error evaluating pod condition running and ready, or succeeded: pod 'kube-dns-autoscaler-5f6455f985-4cpk6' on 'bootstrap-e2e-minion-group-fr2s' didn't have condition {Ready True}; conditions: [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:08:55 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-29 05:09:24 +0000 UTC ContainersNotReady containers with unready status: [autoscaler]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-29 04:56:21 +0000 UTC }]
Jan 29 05:09:28.998: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 2.091718132s
Jan 29 05:09:28.998: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" satisfied condition "running and ready, or succeeded"
Jan 29 05:09:28.999: INFO: Pod "metadata-proxy-v0.1-xmtst": Phase="Running", Reason="", readiness=true. Elapsed: 2.092615124s
Jan 29 05:09:28.999: INFO: Pod "metadata-proxy-v0.1-xmtst" satisfied condition "running and ready, or succeeded"
Jan 29 05:09:30.996: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 4.089987092s
Jan 29 05:09:30.996: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded"
Jan 29 05:09:30.997: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=true. Elapsed: 4.091184772s
Jan 29 05:09:30.997: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6" satisfied condition "running and ready, or succeeded"
Jan 29 05:09:30.997: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. Pods: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0]
Jan 29 05:09:30.997: INFO: Reboot successful on node bootstrap-e2e-minion-group-fr2s
Jan 29 05:09:30.997: INFO: Node bootstrap-e2e-minion-group-8xzv failed reboot test.
[FAILED] Test failed; at least one node failed to reboot in the time given.
In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 05:09:30.998
< Exit [It] each node by ordering clean reboot and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:97 @ 01/29/23 05:09:30.998 (3m59.895s)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:09:30.998
STEP: Collecting events from namespace "kube-system". - test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 05:09:30.998
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-cgf5q to bootstrap-e2e-minion-group-q3jk
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 952.42369ms (952.43546ms including waiting)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Container coredns failed liveness probe, will be restarted
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-cgf5q
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-cgf5q
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-slgkj to bootstrap-e2e-minion-group-fr2s
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0"
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.898971681s (1.89898123s including waiting)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.3:8181/ready": dial tcp 10.64.1.3:8181: connect: connection refused
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.14:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-slgkj_kube-system(dbbd495d-f306-4c8c-894e-7ffeed82522f)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.17:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-slgkj
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-slgkj
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container coredns
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.28:8181/ready": dial tcp 10.64.1.28:8181: connect: connection refused
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}]
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-slgkj
Jan 29 05:09:31.067: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-cgf5q
Jan 29 05:09:31.067: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1
Jan 29 05:09:31.067: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1
Jan 29 05:09:31.067: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 05:09:31.067: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 05:09:31.067: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3ae10 became leader Jan 29 05:09:31.067: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_dffea became leader Jan 29 05:09:31.067: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ed5cc became leader Jan 29 05:09:31.067: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_794a2 became leader Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-6hl7x to bootstrap-e2e-minion-group-fr2s Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 3.144705382s (3.144721595s including waiting) Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet 
bootstrap-e2e-minion-group-fr2s} Killing: Stopping container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6hl7x_kube-system(52759282-0d41-4927-b752-92975d4abd4b) Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.11:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9b2fb to bootstrap-e2e-minion-group-8xzv Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 645.890763ms (645.907319ms including waiting) Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "http://10.64.3.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 05:09:31.067: 
INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9b2fb_kube-system(3a803d1f-02e7-4777-9121-bdfdc7214e10) Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-fn54g to bootstrap-e2e-minion-group-q3jk Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image 
"registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 612.375349ms (612.383552ms including waiting) Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox 
changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container konnectivity-agent Jan 29 05:09:31.067: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-6hl7x Jan 29 05:09:31.067: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9b2fb Jan 29 05:09:31.067: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-fn54g Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine Jan 29 05:09:31.067: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a) Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine Jan 29 05:09:31.067: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622) Jan 29 05:09:31.067: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 05:09:31.067: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 05:09:31.067: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Jan 29 05:09:31.067: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet 
bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343) Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Jan 29 05:09:31.067: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_851f0e92-d2b2-4cde-86fc-61b887267173 became leader Jan 29 05:09:31.067: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_4df1cff4-9e2b-4aeb-9add-320edc370972 became leader Jan 29 05:09:31.067: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_92df568f-7c41-431d-807f-71ca5118c228 became leader Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-4cpk6 to bootstrap-e2e-minion-group-fr2s Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.762583689s (1.762598755s including waiting) Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container autoscaler Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container autoscaler Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container autoscaler Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-4cpk6 Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-4cpk6_kube-system(e3c2ac3f-c229-4e3c-b75e-20da721f6be0) Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-4cpk6 Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container autoscaler Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container autoscaler Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-4cpk6 Jan 29 05:09:31.067: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet 
bootstrap-e2e-minion-group-8xzv} Killing: Stopping container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-8xzv_kube-system(f235327fad7051b81c0d60b9bd4fc9cd) Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container kube-proxy 
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-fr2s_kube-system(4bc9af4e1f2e0f804199bc97b6d57205) Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container kube-proxy Jan 29 05:09:31.067: INFO: event for 
kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-q3jk_kube-system(44fdbb00bb3eea51169ca9d04a5a869e) Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {node-controller } NodeNotReady: Node is not ready Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container kube-proxy Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:09:31.067: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986) Jan 29 05:09:31.067: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e085793b-531d-4e46-9a13-2df2b0a0cf3c became leader Jan 29 05:09:31.067: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_bae8712b-9a00-4c39-8044-92141d52bf42 became leader Jan 29 05:09:31.067: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_8a10b01f-10b3-404d-8242-a505ae074a1a became leader Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-nw9t6 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 953.200505ms (953.207255ms including waiting)
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container default-http-backend
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container default-http-backend
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.4:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-nw9t6
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-nw9t6
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container default-http-backend
Jan 29 05:09:31.067: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-nw9t6
Jan 29 05:09:31.067: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 05:09:31.067: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5sc67 to bootstrap-e2e-minion-group-8xzv
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 814.209742ms (814.233537ms including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.012691866s (2.01271786s including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bjzbd to bootstrap-e2e-minion-group-q3jk
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 688.204291ms (688.219796ms including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.604765022s (1.604774619s including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-kn874 to bootstrap-e2e-master
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 908.820679ms (908.830517ms including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.340504431s (2.340511774s including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-xmtst to bootstrap-e2e-minion-group-fr2s
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.221606ms (680.236479ms including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.718964429s (1.718982952s including waiting)
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metadata-proxy
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container prometheus-to-sd-exporter
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-kn874
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-xmtst
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bjzbd
Jan 29 05:09:31.067: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5sc67
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-gsfr8 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.583628574s (2.58363679s including waiting)
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metrics-server
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metrics-server
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.516746563s (2.516753465s including waiting)
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metrics-server-nanny
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metrics-server-nanny
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container metrics-server
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container metrics-server-nanny
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-gsfr8
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-gsfr8
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-jrjtd to bootstrap-e2e-minion-group-8xzv
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.400479222s (1.400489838s including waiting)
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.068205376s (1.068216228s including waiting)
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server-nanny
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server-nanny
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server-nanny
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-jrjtd_kube-system(f2309b34-237d-44df-b1a4-7ec957702321)
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server-nanny
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server-nanny
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.11:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.11:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-jrjtd
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 05:09:31.068: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.082767621s (2.082775326s including waiting)
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container volume-snapshot-controller
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container volume-snapshot-controller
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container volume-snapshot-controller
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(846294c9-7431-4763-8373-c9c072cf9808)
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container volume-snapshot-controller
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container volume-snapshot-controller
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container volume-snapshot-controller
Jan 29 05:09:31.068: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:09:31.068 (69ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:09:31.068
Jan 29 05:09:31.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:09:31.114 (47ms)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 05:09:31.114
< Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:35 @ 01/29/23 05:09:31.114 (0s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:09:31.114
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:09:31.115
STEP: Collecting events from namespace "reboot-2860". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 05:09:31.115
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 05:09:31.157
Jan 29 05:09:31.200: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 05:09:31.200: INFO:
Jan 29 05:09:31.252: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 05:09:31.296: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 573932df-4ac9-4a16-9c02-0cca288f19f4 2088 0 2023-01-29 04:56:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager
Update v1 2023-01-29 04:56:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 04:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 05:07:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:07:01 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:07:01 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:07:01 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:07:01 +0000 UTC,LastTransitionTime:2023-01-29 04:56:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.145.111.53,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6902beac0ad0c174454f307f49ae755d,SystemUUID:6902beac-0ad0-c174-454f-307f49ae755d,BootID:8368e14e-fc42-4513-ba6d-e7ce07a08226,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:09:31.297: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 05:09:31.355: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 05:10:01.399: INFO: Unable to retrieve kubelet pods for node bootstrap-e2e-master: error trying to reach service: context deadline exceeded: connection error: desc = "transport: Error while dialing dial unix /etc/srv/kubernetes/konnectivity-server/konnectivity-server.socket: connect: no such file or directory" Jan 29 05:10:01.399: INFO: Logging node info for node bootstrap-e2e-minion-group-8xzv Jan 29 05:10:04.701: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8xzv 30426c99-1665-4753-a8aa-3e12ad653388 2281 0 2023-01-29 04:56:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8xzv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 
volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 05:01:44 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 05:08:37 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 05:08:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 05:08:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-minion-group-8xzv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 
UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:37 +0000 UTC,LastTransitionTime:2023-01-29 05:08:36 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:38 +0000 UTC,LastTransitionTime:2023-01-29 05:03:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:38 +0000 UTC,LastTransitionTime:2023-01-29 05:03:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:38 +0000 UTC,LastTransitionTime:2023-01-29 05:03:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:08:38 +0000 UTC,LastTransitionTime:2023-01-29 05:08:38 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.157.136,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8xzv.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8xzv.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:863ee453ce39c71dfd70eb604edc1f2d,SystemUUID:863ee453-ce39-c71d-fd70-eb604edc1f2d,BootID:c99c408e-5e89-47bc-b3e3-841c7ce746dd,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a 
registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:10:04.702: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8xzv Jan 29 05:10:04.748: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8xzv Jan 29 05:10:07.723: INFO: metrics-server-v0.5.2-867b8754b9-jrjtd started at 2023-01-29 04:56:43 +0000 UTC (0+2 container statuses recorded) Jan 29 05:10:07.723: INFO: Container metrics-server ready: true, restart count 6 Jan 29 05:10:07.723: INFO: Container metrics-server-nanny ready: true, restart count 6 Jan 29 05:10:07.723: INFO: kube-proxy-bootstrap-e2e-minion-group-8xzv started at 2023-01-29 04:56:09 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:07.723: INFO: Container kube-proxy ready: false, restart count 4 Jan 29 05:10:07.723: INFO: metadata-proxy-v0.1-5sc67 started at 2023-01-29 04:56:10 +0000 UTC (0+2 container statuses recorded) Jan 29 05:10:07.723: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 05:10:07.723: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 05:10:07.723: INFO: konnectivity-agent-9b2fb started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:07.723: INFO: Container konnectivity-agent ready: true, restart count 5 Jan 29 05:10:07.916: INFO: Latency metrics for node bootstrap-e2e-minion-group-8xzv Jan 29 05:10:07.916: INFO: Logging node info for node bootstrap-e2e-minion-group-fr2s Jan 29 05:10:07.959: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-fr2s 0621ab1c-3d02-4018-837d-bc99627df4e9 2430 0 2023-01-29 04:56:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux 
cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-fr2s kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 05:08:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 05:09:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 05:09:23 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 05:09:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} 
status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-minion-group-fr2s,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 
UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:23 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:23 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:23 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:09:23 +0000 UTC,LastTransitionTime:2023-01-29 05:09:23 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.196.249.18,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-fr2s.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-fr2s.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cc854fd12580be1e80a1147a3c758d9,SystemUUID:7cc854fd-1258-0be1-e80a-1147a3c758d9,BootID:385095b8-987a-4073-bd07-cf580cd4c436,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def 
registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:10:07.960: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-fr2s Jan 29 05:10:08.010: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-fr2s Jan 29 05:10:08.079: INFO: kube-proxy-bootstrap-e2e-minion-group-fr2s started at 2023-01-29 04:56:06 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container kube-proxy ready: true, restart count 5 Jan 29 05:10:08.079: INFO: l7-default-backend-8549d69d99-nw9t6 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container default-http-backend ready: true, restart count 4 Jan 29 05:10:08.079: INFO: volume-snapshot-controller-0 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container 
volume-snapshot-controller ready: false, restart count 7 Jan 29 05:10:08.079: INFO: kube-dns-autoscaler-5f6455f985-4cpk6 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container autoscaler ready: true, restart count 3 Jan 29 05:10:08.079: INFO: coredns-6846b5b5f-slgkj started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container coredns ready: true, restart count 4 Jan 29 05:10:08.079: INFO: metadata-proxy-v0.1-xmtst started at 2023-01-29 04:56:07 +0000 UTC (0+2 container statuses recorded) Jan 29 05:10:08.079: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 05:10:08.079: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 05:10:08.079: INFO: konnectivity-agent-6hl7x started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.079: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 29 05:10:08.253: INFO: Latency metrics for node bootstrap-e2e-minion-group-fr2s Jan 29 05:10:08.253: INFO: Logging node info for node bootstrap-e2e-minion-group-q3jk Jan 29 05:10:08.295: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-q3jk c80d534d-cc20-420c-aa82-58825be6696f 2233 0 2023-01-29 04:56:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-q3jk kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 05:06:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 05:08:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 05:08:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {node-problem-detector Update v1 2023-01-29 05:08:24 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-minion-group-q3jk,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 
DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:23 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 
UTC,LastTransitionTime:2023-01-29 05:08:24 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:24 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:24 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:08:24 +0000 UTC,LastTransitionTime:2023-01-29 05:08:24 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:34.82.121.186,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-q3jk.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-q3jk.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f4bd8995b54066c53ba6b862f7599c91,SystemUUID:f4bd8995-b540-66c5-3ba6-b862f7599c91,BootID:ae270d5d-8783-4663-8a9c-cac3385f9d75,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 
gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:10:08.296: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-q3jk Jan 29 05:10:08.343: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-q3jk Jan 29 05:10:08.407: INFO: kube-proxy-bootstrap-e2e-minion-group-q3jk started at 2023-01-29 04:56:08 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.407: INFO: Container kube-proxy ready: true, restart count 4 Jan 29 05:10:08.407: INFO: metadata-proxy-v0.1-bjzbd started at 2023-01-29 04:56:09 +0000 UTC (0+2 container statuses recorded) Jan 29 05:10:08.407: INFO: Container metadata-proxy ready: true, restart count 1 Jan 29 05:10:08.407: INFO: Container prometheus-to-sd-exporter ready: true, restart count 1 Jan 29 05:10:08.407: INFO: konnectivity-agent-fn54g started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.407: INFO: Container konnectivity-agent ready: true, restart count 6 Jan 29 05:10:08.407: INFO: coredns-6846b5b5f-cgf5q started at 2023-01-29 04:56:28 +0000 UTC (0+1 container statuses recorded) Jan 29 05:10:08.407: INFO: Container coredns ready: true, 
restart count 5 Jan 29 05:10:08.569: INFO: Latency metrics for node bootstrap-e2e-minion-group-q3jk END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:10:08.569 (37.454s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:10:08.569 (37.454s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:10:08.569 STEP: Destroying namespace "reboot-2860" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 05:10:08.569 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:10:08.616 (47ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:10:08.616 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:10:08.616 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] wait for service account "default" in namespace "reboot-7631": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 05:12:15.028 There were additional failures detected after the initial failure. These are visible in the timeline. (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:10:08.701 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:10:08.701 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:10:08.701 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 05:10:08.701 Jan 29 05:10:08.701: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 05:10:08.703 Jan 29 05:12:15.028: INFO: Unexpected error: <*fmt.wrapError | 0xc00537a000>: { msg: "wait for service account \"default\" in namespace \"reboot-7631\": timed out waiting for the condition", err: <*errors.errorString | 0xc000111ce0>{ s: "timed out waiting for the condition", }, } [FAILED] wait for service account "default" in namespace "reboot-7631": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 05:12:15.028 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:12:15.028 (2m6.327s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:12:15.028 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 05:12:15.028 Jan 29 05:12:15.069: INFO: Unexpected error: <*url.Error | 0xc005402870>: { Op: "Get", URL: "https://34.145.111.53/api/v1/namespaces/kube-system/events", Err: <*net.OpError | 0xc004dc83c0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc004fbc510>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 145, 111, 53], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc0010ba540>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } [FAILED] Get "https://34.145.111.53/api/v1/namespaces/kube-system/events": dial tcp 34.145.111.53:443: connect: connection refused In [AfterEach] at: test/e2e/cloud/gcp/reboot.go:75 @ 01/29/23 05:12:15.069 < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:12:15.069 (41ms) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:12:15.069 Jan 29 05:12:15.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready < Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:12:15.109 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:12:15.109 STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:12:15.109 STEP: Collecting events from namespace "reboot-7631". 
- test/e2e/framework/debug/dump.go:42 @ 01/29/23 05:12:15.109 Jan 29 05:12:15.148: INFO: Unexpected error: failed to list events in namespace "reboot-7631": <*url.Error | 0xc004fbc540>: { Op: "Get", URL: "https://34.145.111.53/api/v1/namespaces/reboot-7631/events", Err: <*net.OpError | 0xc00500e5f0>{ Op: "dial", Net: "tcp", Source: nil, Addr: <*net.TCPAddr | 0xc005371410>{ IP: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 255, 255, 34, 145, 111, 53], Port: 443, Zone: "", }, Err: <*os.SyscallError | 0xc00537a300>{ Syscall: "connect", Err: <syscall.Errno>0x6f, }, }, } END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:12:15.149 (40ms) [FAILED] failed to list events in namespace "reboot-7631": Get "https://34.145.111.53/api/v1/namespaces/reboot-7631/events": dial tcp 34.145.111.53:443: connect: connection refused In [DeferCleanup (Each)] at: test/e2e/framework/debug/dump.go:44 @ 01/29/23 05:12:15.149 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:12:15.149 (40ms) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:12:15.149 STEP: Destroying namespace "reboot-7631" for this suite. 
- test/e2e/framework/framework.go:347 @ 01/29/23 05:12:15.149 [FAILED] Couldn't delete ns: "reboot-7631": Delete "https://34.145.111.53/api/v1/namespaces/reboot-7631": dial tcp 34.145.111.53:443: connect: connection refused (&url.Error{Op:"Delete", URL:"https://34.145.111.53/api/v1/namespaces/reboot-7631", Err:(*net.OpError)(0xc004dc8b90)}) In [DeferCleanup (Each)] at: test/e2e/framework/framework.go:383 @ 01/29/23 05:12:15.189 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:12:15.189 (40ms) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:12:15.189 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:12:15.189 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sordering\sunclean\sreboot\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] wait for service account "default" in namespace "reboot-7631": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 05:12:15.028 There were additional failures detected after the initial failure. These are visible in the timeline. (from junit_01.xml)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
[FAILED] wait for service account "default" in namespace "reboot-786": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 05:23:35.545 (from ginkgo_report.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:21:35.422 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:21:35.422 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:21:35.422 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 05:21:35.422 Jan 29 05:21:35.422: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 05:21:35.424 Jan 29 05:23:35.545: INFO: Unexpected error: <*fmt.wrapError | 0xc00537a000>: { msg: "wait for service account \"default\" in namespace \"reboot-786\": timed out waiting for the condition", err: <*errors.errorString | 0xc000111ce0>{ s: "timed out waiting for the condition", }, } [FAILED] wait for service account "default" in namespace "reboot-786": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 05:23:35.545 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:23:35.545 (2m0.123s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:23:35.545 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 05:23:35.545 Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-cgf5q to bootstrap-e2e-minion-group-q3jk Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 952.42369ms (952.43546ms including waiting) Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Container coredns failed liveness probe, will be restarted Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet 
bootstrap-e2e-minion-group-q3jk} Killing: Stopping container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-cgf5q Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-cgf5q Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-cgf5q_kube-system(aeeef6ad-37df-4830-8ddf-a2fa49dc0afb) Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Readiness probe failed: Get "http://10.64.2.13:8181/ready": dial tcp 10.64.2.13:8181: connect: connection refused Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-cgf5q: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-slgkj to bootstrap-e2e-minion-group-fr2s Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.898971681s (1.89898123s including waiting) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.3:8181/ready": dial tcp 10.64.1.3:8181: connect: connection refused Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.14:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-slgkj_kube-system(dbbd495d-f306-4c8c-894e-7ffeed82522f) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.17:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-slgkj Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-slgkj Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.28:8181/ready": dial tcp 10.64.1.28:8181: connect: connection refused Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-slgkj_kube-system(dbbd495d-f306-4c8c-894e-7ffeed82522f) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-slgkj Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-cgf5q Jan 29 05:23:35.631: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 
05:23:35.631: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: 
Pod sandbox changed, it will be killed and re-created. Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3ae10 became leader Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_dffea became leader Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ed5cc became leader Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_794a2 became leader Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_5e980 became leader Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_c0a4b became leader Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-6hl7x to bootstrap-e2e-minion-group-fr2s Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 3.144705382s (3.144721595s including waiting) Jan 29 05:23:35.631: INFO: event for 
konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6hl7x_kube-system(52759282-0d41-4927-b752-92975d4abd4b) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.11:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6hl7x_kube-system(52759282-0d41-4927-b752-92975d4abd4b)
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9b2fb to bootstrap-e2e-minion-group-8xzv
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 645.890763ms (645.907319ms including waiting)
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "http://10.64.3.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9b2fb_kube-system(3a803d1f-02e7-4777-9121-bdfdc7214e10)
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9b2fb_kube-system(3a803d1f-02e7-4777-9121-bdfdc7214e10)
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-fn54g to bootstrap-e2e-minion-group-q3jk
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1"
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 612.375349ms (612.383552ms including waiting)
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Container konnectivity-agent failed liveness probe, will be restarted
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-fn54g_kube-system(5010991f-c4fa-4022-b57f-06a1df1b8839)
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-6hl7x
Jan 29 05:23:35.631: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9b2fb
Jan 29 05:23:35.631: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-fn54g
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://127.0.0.1:8133/healthz": dial tcp 127.0.0.1:8133: connect: connection refused
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 05:23:35.631: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 05:23:35.631: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 05:23:35.631: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 05:23:35.631: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_851f0e92-d2b2-4cde-86fc-61b887267173 became leader
Jan 29 05:23:35.631: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_4df1cff4-9e2b-4aeb-9add-320edc370972 became leader
Jan 29 05:23:35.631: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_92df568f-7c41-431d-807f-71ca5118c228 became leader
Jan 29 05:23:35.631: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_60dcade0-c8ca-4fde-976e-1913e57f00ec became leader
Jan 29 05:23:35.631: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a86ab92f-b8ec-4c76-831b-83c0d8467492 became leader
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-4cpk6 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.762583689s (1.762598755s including waiting)
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-4cpk6
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-4cpk6_kube-system(e3c2ac3f-c229-4e3c-b75e-20da721f6be0)
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-4cpk6
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-4cpk6_kube-system(e3c2ac3f-c229-4e3c-b75e-20da721f6be0)
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-4cpk6
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-8xzv_kube-system(f235327fad7051b81c0d60b9bd4fc9cd)
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-8xzv_kube-system(f235327fad7051b81c0d60b9bd4fc9cd)
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-fr2s_kube-system(4bc9af4e1f2e0f804199bc97b6d57205)
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-q3jk_kube-system(44fdbb00bb3eea51169ca9d04a5a869e)
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-q3jk_kube-system(44fdbb00bb3eea51169ca9d04a5a869e)
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e085793b-531d-4e46-9a13-2df2b0a0cf3c became leader
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_bae8712b-9a00-4c39-8044-92141d52bf42 became leader
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_8a10b01f-10b3-404d-8242-a505ae074a1a became leader
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_19f6a6e9-ecb9-4903-9847-6c580f807c75 became leader
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_566eeaec-37f0-4f51-ab7a-1360175e11f9 became leader
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_0f0f0a28-ea8c-4c76-9364-eb4f5c2dd2b2 became leader
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-nw9t6 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 953.200505ms (953.207255ms including waiting)
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container default-http-backend
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container default-http-backend
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.4:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-nw9t6
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-nw9t6
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container default-http-backend
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container default-http-backend
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-nw9t6
Jan 29 05:23:35.631: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5sc67 to bootstrap-e2e-minion-group-8xzv
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 814.209742ms (814.233537ms including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.012691866s (2.01271786s including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bjzbd to bootstrap-e2e-minion-group-q3jk
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 688.204291ms (688.219796ms including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.604765022s (1.604774619s including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-kn874 to bootstrap-e2e-master
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 908.820679ms (908.830517ms including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.340504431s (2.340511774s including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-xmtst to bootstrap-e2e-minion-group-fr2s
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.221606ms (680.236479ms including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.718964429s (1.718982952s including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-kn874
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-xmtst
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bjzbd
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5sc67
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-gsfr8 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.583628574s (2.58363679s including waiting)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.516746563s (2.516753465s including waiting)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-gsfr8
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-gsfr8
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-jrjtd to bootstrap-e2e-minion-group-8xzv
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.400479222s (1.400489838s including waiting)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.068205376s (1.068216228s including waiting)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-jrjtd_kube-system(f2309b34-237d-44df-b1a4-7ec957702321)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.11:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.11:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-jrjtd_kube-system(f2309b34-237d-44df-b1a4-7ec957702321)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-jrjtd_kube-system(f2309b34-237d-44df-b1a4-7ec957702321)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-jrjtd
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.082767621s (2.082775326s including waiting)
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(846294c9-7431-4763-8373-c9c072cf9808)
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(846294c9-7431-4763-8373-c9c072cf9808)
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:23:35.631 (86ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:23:35.631
Jan 29 05:23:35.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 29 05:23:35.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:35.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:35.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:37.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:37.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:37.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:47.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:47.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:47.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:55.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:55.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:55.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:23:57.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:23:57.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:23:57.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:23:59.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:23:59.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:23:59.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:03.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:03.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:03.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:07.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:07.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:07.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:09.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:09.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:24:09.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:15.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:15.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:15.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:19.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:19.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:19.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:21.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:21.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:21.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:23.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:23.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:24:23.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:27.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:27.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:27.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:29.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:29.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:29.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:31.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:31.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:31.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:33.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:33.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:33.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:37.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:37.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:24:37.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:41.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:41.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:41.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:43.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:43.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:43.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:45.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:45.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:45.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:47.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:47.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:47.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:24:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:53.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:53.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:53.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:55.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:55.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:55.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:57.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:57.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:57.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:01.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:01.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:01.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:03.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:03.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:03.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:25:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:09.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:09.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:09.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:15.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:15.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:15.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:19.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:19.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:25:19.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:21.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:21.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:21.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:23.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:23.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:23.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:27.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:27.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:27.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:31.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:31.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:31.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:33.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:33.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:25:33.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:37.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:37.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:37.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:47.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:47.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:47.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:55.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:55.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:55.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:57.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:57.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:57.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:25:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:03.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:03.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:03.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:26:09.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 05:26:08 +0000 UTC}]. Failure
Jan 29 05:26:11.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 05:26:08 +0000 UTC}]. Failure
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:26:13.728 (2m38.096s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:26:13.728
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:26:13.728
STEP: Collecting events from namespace "reboot-786". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 05:26:13.728
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 05:26:13.771
Jan 29 05:26:13.813: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 05:26:13.813: INFO:
Jan 29 05:26:13.881: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 05:26:13.924: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 573932df-4ac9-4a16-9c02-0cca288f19f4 3104 0 2023-01-29 04:56:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:05 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 04:56:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 04:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 05:22:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: 
{{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:22:41 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:22:41 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:22:41 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:22:41 +0000 UTC,LastTransitionTime:2023-01-29 04:56:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.145.111.53,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6902beac0ad0c174454f307f49ae755d,SystemUUID:6902beac-0ad0-c174-454f-307f49ae755d,BootID:8368e14e-fc42-4513-ba6d-e7ce07a08226,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 05:26:13.925: INFO: Logging kubelet events for node bootstrap-e2e-master
Jan 29 05:26:13.981: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master
Jan 29 05:26:14.112: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded)
Jan 29 05:26:14.112: INFO: Container etcd-container ready: true, restart count 3
Jan 29 05:26:14.112: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded)
Jan 29 05:26:14.112: INFO: Container etcd-container ready: true, restart count 4
Jan 29 05:26:14.112: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 04:55:37 +0000 UTC (0+1 container statuses recorded)
Jan 29 05:26:14.112: INFO: Container l7-lb-controller ready: true, restart count 8
Jan 29 05:26:14.112: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded)
Jan 29 05:26:14.112: INFO: Container konnectivity-server-container ready: true, restart count 5
Jan 29 05:26:14.112: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded)
Jan 29 05:26:14.112: INFO: Container kube-apiserver ready: true, restart count 2
Jan 29 05:26:14.112: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 04:55:37 +0000 UTC (0+1 container statuses recorded)
Jan 29 05:26:14.112: INFO: Container kube-addon-manager ready: true, restart count 4
Jan 29 05:26:14.112: INFO: metadata-proxy-v0.1-kn874 started at 2023-01-29 04:56:34 +0000 UTC (0+2 container statuses recorded)
Jan 29 05:26:14.112: INFO: Container metadata-proxy ready: true, restart count 0
Jan 29 05:26:14.112: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0
Jan 29 05:26:14.112: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded)
Jan 29 05:26:14.112: INFO: Container kube-controller-manager ready: true, restart count 8
Jan 29 05:26:14.112: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded)
Jan 29 05:26:14.112: INFO: Container kube-scheduler ready: true, restart count 7
Jan 29 05:26:14.328: INFO: Latency metrics for node bootstrap-e2e-master
Jan 29 05:26:14.328: INFO: Logging node info for node bootstrap-e2e-minion-group-8xzv
Jan 29 05:26:14.385: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8xzv 30426c99-1665-4753-a8aa-3e12ad653388 3221 0 2023-01-29 04:56:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8xzv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 05:14:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 05:25:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{}
,"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 05:26:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-29 05:26:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} }]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-minion-group-8xzv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 
UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:True,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:25:25 +0000 UTC,Reason:FrequentKubeletRestart,Message:Found 7 matching logs, which meets the threshold of 5,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 
05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.157.136,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8xzv.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8xzv.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:863ee453ce39c71dfd70eb604edc1f2d,SystemUUID:863ee453-ce39-c71d-fd70-eb604edc1f2d,BootID:fb37bfad-63dd-4512-ad64-a121dafde00b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 
registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Jan 29 05:26:14.386: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8xzv
Jan 29 05:26:14.445: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8xzv
Jan 29 05:26:14.553: INFO: konnectivity-agent-9b2fb started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded)
Jan 29 05:26:14.553: INFO: Container konnectivity-agent ready: true, restart count 7
Jan 29 05:26:14.553: INFO: metrics-server-v0.5.2-867b8754b9-jrjtd started at 2023-01-29 04:56:43 +0000 UTC (0+2 container statuses recorded)
Jan 29 05:26:14.553: INFO: Container metrics-server ready: false, restart count 8
Jan 29 05:26:14.553: INFO: Container metrics-server-nanny ready: false, restart count 8
Jan 29 05:26:14.553: INFO: kube-proxy-bootstrap-e2e-minion-group-8xzv started at 2023-01-29 04:56:09 +0000 UTC (0+1 container statuses recorded)
Jan 29 05:26:14.553: INFO: Container kube-proxy ready: true, restart count 6
Jan 29 05:26:14.553: INFO: metadata-proxy-v0.1-5sc67 started at 2023-01-29 04:56:10 +0000 UTC (0+2 container statuses recorded)
Jan 29 05:26:14.553: INFO: Container metadata-proxy ready: true, restart count 2
Jan 29 05:26:14.553: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2
Jan 29 05:26:19.048: INFO: Latency metrics for node bootstrap-e2e-minion-group-8xzv
Jan 29 05:26:19.048: INFO: Logging node info for node bootstrap-e2e-minion-group-fr2s
Jan 29 05:26:19.091: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-fr2s 0621ab1c-3d02-4018-837d-bc99627df4e9 3268 0 2023-01-29 04:56:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-fr2s kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 05:14:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector 
Update v1 2023-01-29 05:25:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 05:26:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 05:26:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-minion-group-fr2s,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:True,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:25:25 +0000 UTC,Reason:FrequentKubeletRestart,Message:Found 7 matching logs, 
which meets the threshold of 5,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet 
has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.196.249.18,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-fr2s.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-fr2s.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cc854fd12580be1e80a1147a3c758d9,SystemUUID:7cc854fd-1258-0be1-e80a-1147a3c758d9,BootID:3d14e660-eaa8-4058-979f-f9970caf9460,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:26:19.091: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-fr2s Jan 29 05:26:19.140: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-fr2s Jan 29 05:26:19.304: INFO: coredns-6846b5b5f-slgkj started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container coredns ready: true, restart count 6 Jan 29 05:26:19.304: INFO: kube-dns-autoscaler-5f6455f985-4cpk6 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container autoscaler ready: true, restart count 5 Jan 
29 05:26:19.304: INFO: metadata-proxy-v0.1-xmtst started at 2023-01-29 04:56:07 +0000 UTC (0+2 container statuses recorded) Jan 29 05:26:19.304: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 05:26:19.304: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 05:26:19.304: INFO: konnectivity-agent-6hl7x started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container konnectivity-agent ready: true, restart count 9 Jan 29 05:26:19.304: INFO: kube-proxy-bootstrap-e2e-minion-group-fr2s started at 2023-01-29 04:56:06 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container kube-proxy ready: true, restart count 6 Jan 29 05:26:19.304: INFO: l7-default-backend-8549d69d99-nw9t6 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container default-http-backend ready: false, restart count 4 Jan 29 05:26:19.304: INFO: volume-snapshot-controller-0 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container volume-snapshot-controller ready: true, restart count 11 Jan 29 05:26:47.648: INFO: Latency metrics for node bootstrap-e2e-minion-group-fr2s Jan 29 05:26:47.648: INFO: Logging node info for node bootstrap-e2e-minion-group-q3jk Jan 29 05:27:47.692: INFO: Error getting node info Get "https://34.145.111.53/api/v1/nodes/bootstrap-e2e-minion-group-q3jk": stream error: stream ID 1287; INTERNAL_ERROR; received from peer Jan 29 05:27:47.692: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] 
[]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:27:47.692: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-q3jk Jan 29 05:27:47.740: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-q3jk Jan 29 05:28:20.596: INFO: coredns-6846b5b5f-cgf5q started at 2023-01-29 04:56:28 +0000 UTC (0+1 container statuses recorded) Jan 29 05:28:20.596: INFO: Container coredns ready: true, restart count 8 Jan 29 05:28:20.596: INFO: kube-proxy-bootstrap-e2e-minion-group-q3jk started at 2023-01-29 04:56:08 +0000 UTC (0+1 container statuses recorded) Jan 29 05:28:20.596: INFO: Container kube-proxy ready: true, restart count 6 Jan 29 05:28:20.596: INFO: metadata-proxy-v0.1-bjzbd started at 2023-01-29 04:56:09 +0000 UTC (0+2 container statuses recorded) Jan 29 05:28:20.596: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 05:28:20.596: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 05:28:20.596: INFO: konnectivity-agent-fn54g started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:28:20.596: INFO: Container konnectivity-agent ready: true, restart count 8 Jan 29 05:28:20.837: INFO: Latency metrics for node bootstrap-e2e-minion-group-q3jk END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:28:20.837 (2m7.109s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot 
[Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:28:20.837 (2m7.109s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:28:20.837 STEP: Destroying namespace "reboot-786" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 05:28:20.837 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:28:30.79 (9.953s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:28:30.79 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:28:30.79 (0s)
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\sswitching\soff\sthe\snetwork\sinterface\sand\sensure\sthey\sfunction\supon\sswitch\son$'
[FAILED] wait for service account "default" in namespace "reboot-786": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 05:23:35.545 (from junit_01.xml)
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:21:35.422 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:21:35.422 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:21:35.422 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 05:21:35.422 Jan 29 05:21:35.422: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 05:21:35.424 Jan 29 05:23:35.545: INFO: Unexpected error: <*fmt.wrapError | 0xc00537a000>: { msg: "wait for service account \"default\" in namespace \"reboot-786\": timed out waiting for the condition", err: <*errors.errorString | 0xc000111ce0>{ s: "timed out waiting for the condition", }, } [FAILED] wait for service account "default" in namespace "reboot-786": timed out waiting for the condition In [BeforeEach] at: test/e2e/framework/framework.go:251 @ 01/29/23 05:23:35.545 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:23:35.545 (2m0.123s) > Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:23:35.545 STEP: Collecting events from namespace "kube-system". 
- test/e2e/cloud/gcp/reboot.go:73 @ 01/29/23 05:23:35.545 Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-cgf5q to bootstrap-e2e-minion-group-q3jk Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 952.42369ms (952.43546ms including waiting) Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Readiness probe failed: Get "http://10.64.2.3:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.3:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Container coredns failed liveness probe, will be restarted Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet 
bootstrap-e2e-minion-group-q3jk} Killing: Stopping container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Readiness probe failed: Get "http://10.64.2.6:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-cgf5q Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-cgf5q Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container coredns Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-cgf5q_kube-system(aeeef6ad-37df-4830-8ddf-a2fa49dc0afb) Jan 29 05:23:35.630: INFO: event for coredns-6846b5b5f-cgf5q: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Readiness probe failed: Get "http://10.64.2.13:8181/ready": dial tcp 10.64.2.13:8181: connect: connection refused Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-cgf5q: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6846b5b5f-slgkj to bootstrap-e2e-minion-group-fr2s Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.10.0" Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.10.0" in 1.898971681s (1.89898123s including waiting) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.3:8181/ready": dial tcp 10.64.1.3:8181: connect: connection refused Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.14:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.14:8080/health": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-slgkj_kube-system(dbbd495d-f306-4c8c-894e-7ffeed82522f) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.17:8181/ready": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-slgkj Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/coredns-6846b5b5f-slgkj Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.10.0" already present on machine Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: Get "http://10.64.1.28:8181/ready": dial tcp 10.64.1.28:8181: connect: connection refused Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container coredns Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container coredns in pod coredns-6846b5b5f-slgkj_kube-system(dbbd495d-f306-4c8c-894e-7ffeed82522f) Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f-slgkj: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-slgkj Jan 29 05:23:35.631: INFO: event for coredns-6846b5b5f: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6846b5b5f-cgf5q Jan 29 05:23:35.631: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 1 Jan 29 
05:23:35.631: INFO: event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6846b5b5f to 2 from 1 Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 05:23:35.631: INFO: event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-bootstrap-e2e-master_kube-system(2ef2f0d9ccfe01aa3c1d26059de8a300) Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: 
Pod sandbox changed, it will be killed and re-created. Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.7-0" already present on machine Jan 29 05:23:35.631: INFO: event for etcd-server-events-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container etcd-container in pod etcd-server-events-bootstrap-e2e-master_kube-system(9f090652556c0eb7722415ec1d3682eb) Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_3ae10 became leader Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_dffea became leader Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_ed5cc became leader Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_794a2 became leader Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_5e980 became leader Jan 29 05:23:35.631: INFO: event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_c0a4b became leader Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-6hl7x to bootstrap-e2e-minion-group-fr2s Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 3.144705382s (3.144721595s including waiting) Jan 29 05:23:35.631: INFO: event for 
konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6hl7x_kube-system(52759282-0d41-4927-b752-92975d4abd4b) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.11:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-6hl7x_kube-system(52759282-0d41-4927-b752-92975d4abd4b) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-6hl7x: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-9b2fb to bootstrap-e2e-minion-group-8xzv Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 645.890763ms (645.907319ms including waiting) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container konnectivity-agent Jan 29 05:23:35.631: INFO: event for 
konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "http://10.64.3.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9b2fb_kube-system(3a803d1f-02e7-4777-9121-bdfdc7214e10) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-9b2fb_kube-system(3a803d1f-02e7-4777-9121-bdfdc7214e10) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-9b2fb: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-fn54g to bootstrap-e2e-minion-group-q3jk Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" in 612.375349ms (612.383552ms including waiting) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container konnectivity-agent Jan 29 05:23:35.631: INFO: event for 
konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container konnectivity-agent Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.2:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Unhealthy: Liveness probe failed: Get "http://10.64.2.4:8093/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Container konnectivity-agent failed liveness probe, will be restarted Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1" already present on machine
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container konnectivity-agent
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {kubelet bootstrap-e2e-minion-group-q3jk} BackOff: Back-off restarting failed container konnectivity-agent in pod konnectivity-agent-fn54g_kube-system(5010991f-c4fa-4022-b57f-06a1df1b8839)
Jan 29 05:23:35.631: INFO: event for konnectivity-agent-fn54g: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-6hl7x
Jan 29 05:23:35.631: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-9b2fb
Jan 29 05:23:35.631: INFO: event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-fn54g
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container konnectivity-server-container
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container konnectivity-server-container
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container konnectivity-server-container
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1" already present on machine
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container konnectivity-server-container in pod konnectivity-server-bootstrap-e2e-master_kube-system(122c336be1dd86824540422433813d8a)
Jan 29 05:23:35.631: INFO: event for konnectivity-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://127.0.0.1:8133/healthz": dial tcp 127.0.0.1:8133: connect: connection refused
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-addon-manager
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-addon-manager
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-addon-manager
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-addon-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-addon-manager in pod kube-addon-manager-bootstrap-e2e-master_kube-system(ecad253bdb3dfebf3d39882505699622)
Jan 29 05:23:35.631: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 05:23:35.631: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 05:23:35.631: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver
Jan 29 05:23:35.631: INFO: event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:443/livez?exclude=etcd&exclude=kms-provider-0&exclude=kms-provider-1": dial tcp 127.0.0.1:443: connect: connection refused
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(a9901ac1fc908c01cd17c25062859343)
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager
Jan 29 05:23:35.631: INFO: event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_851f0e92-d2b2-4cde-86fc-61b887267173 became leader
Jan 29 05:23:35.631: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_4df1cff4-9e2b-4aeb-9add-320edc370972 became leader
Jan 29 05:23:35.631: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_92df568f-7c41-431d-807f-71ca5118c228 became leader
Jan 29 05:23:35.631: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_60dcade0-c8ca-4fde-976e-1913e57f00ec became leader
Jan 29 05:23:35.631: INFO: event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_a86ab92f-b8ec-4c76-831b-83c0d8467492 became leader
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-4cpk6 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4"
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 1.762583689s (1.762598755s including waiting)
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-4cpk6
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-4cpk6_kube-system(e3c2ac3f-c229-4e3c-b75e-20da721f6be0)
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/kube-dns-autoscaler-5f6455f985-4cpk6
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container autoscaler
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985-4cpk6: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container autoscaler in pod kube-dns-autoscaler-5f6455f985-4cpk6_kube-system(e3c2ac3f-c229-4e3c-b75e-20da721f6be0)
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-4cpk6
Jan 29 05:23:35.631: INFO: event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-8xzv_kube-system(f235327fad7051b81c0d60b9bd4fc9cd)
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-8xzv_kube-system(f235327fad7051b81c0d60b9bd4fc9cd)
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-8xzv: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-fr2s_kube-system(4bc9af4e1f2e0f804199bc97b6d57205)
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-fr2s: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-q3jk_kube-system(44fdbb00bb3eea51169ca9d04a5a869e)
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} Killing: Stopping container kube-proxy
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {kubelet bootstrap-e2e-minion-group-q3jk} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-q3jk_kube-system(44fdbb00bb3eea51169ca9d04a5a869e)
Jan 29 05:23:35.631: INFO: event for kube-proxy-bootstrap-e2e-minion-group-q3jk: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2" already present on machine
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-scheduler
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-scheduler in pod kube-scheduler-bootstrap-e2e-master_kube-system(b286b0d19b475d76fb3eba5bf7889986)
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_e085793b-531d-4e46-9a13-2df2b0a0cf3c became leader
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_bae8712b-9a00-4c39-8044-92141d52bf42 became leader
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_8a10b01f-10b3-404d-8242-a505ae074a1a became leader
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_19f6a6e9-ecb9-4903-9847-6c580f807c75 became leader
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_566eeaec-37f0-4f51-ab7a-1360175e11f9 became leader
Jan 29 05:23:35.631: INFO: event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_0f0f0a28-ea8c-4c76-9364-eb4f5c2dd2b2 became leader
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-nw9t6 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11"
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 953.200505ms (953.207255ms including waiting)
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container default-http-backend
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container default-http-backend
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Liveness probe failed: Get "http://10.64.1.4:8080/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Container default-http-backend failed liveness probe, will be restarted
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-nw9t6
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/l7-default-backend-8549d69d99-nw9t6
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" already present on machine
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container default-http-backend
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container default-http-backend
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99-nw9t6: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-nw9t6
Jan 29 05:23:35.631: INFO: event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573)
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container l7-lb-controller
Jan 29 05:23:35.631: INFO: event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-5sc67 to bootstrap-e2e-minion-group-8xzv
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 814.209742ms (814.233537ms including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.012691866s (2.01271786s including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-5sc67: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-bjzbd to bootstrap-e2e-minion-group-q3jk
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 688.204291ms (688.219796ms including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.604765022s (1.604774619s including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {kubelet bootstrap-e2e-minion-group-q3jk} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-bjzbd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-kn874 to bootstrap-e2e-master
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 908.820679ms (908.830517ms including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.340504431s (2.340511774s including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-kn874: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-xmtst to bootstrap-e2e-minion-group-fr2s
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 680.221606ms (680.236479ms including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metadata-proxy
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1"
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.718964429s (1.718982952s including waiting)
Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst:
{kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container prometheus-to-sd-exporter Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container prometheus-to-sd-exporter Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metadata-proxy Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metadata-proxy Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container prometheus-to-sd-exporter Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container prometheus-to-sd-exporter Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1-xmtst: {node-controller } NodeNotReady: Node is not ready Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-kn874 Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1: 
{daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-xmtst Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-bjzbd Jan 29 05:23:35.631: INFO: event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-5sc67 Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } FailedScheduling: no nodes available to schedule pods Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-gsfr8 to bootstrap-e2e-minion-group-fr2s Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.583628574s (2.58363679s including waiting) Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metrics-server Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metrics-server Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Jan 29 05:23:35.631: INFO: event for 
metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 2.516746563s (2.516753465s including waiting) Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container metrics-server-nanny Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container metrics-server-nanny Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container metrics-server Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container metrics-server-nanny Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c-gsfr8: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-gsfr8
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulDelete: Deleted pod: metrics-server-v0.5.2-6764bf875c-gsfr8
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-867b8754b9-jrjtd to bootstrap-e2e-minion-group-8xzv
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2"
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 1.400479222s (1.400489838s including waiting)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14"
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.068205376s (1.068216228s including waiting)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.3:10250/readyz": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.3:10250/livez": dial tcp 10.64.3.3:10250: connect: connection refused
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.4:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.4:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-jrjtd_kube-system(f2309b34-237d-44df-b1a4-7ec957702321)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Created: Created container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Started: Started container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: Get "https://10.64.3.11:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Liveness probe failed: Get "https://10.64.3.11:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} Killing: Stopping container metrics-server-nanny
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-867b8754b9-jrjtd_kube-system(f2309b34-237d-44df-b1a4-7ec957702321)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {kubelet bootstrap-e2e-minion-group-8xzv} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-867b8754b9-jrjtd_kube-system(f2309b34-237d-44df-b1a4-7ec957702321)
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9-jrjtd: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2-867b8754b9: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-867b8754b9-jrjtd
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-867b8754b9 to 1
Jan 29 05:23:35.631: INFO: event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled down replica set metrics-server-v0.5.2-6764bf875c to 0 from 1
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/1 nodes are available: 1 node(s) were unschedulable. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-fr2s
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0"
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.082767621s (2.082775326s including waiting)
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(846294c9-7431-4763-8373-c9c072cf9808)
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod kube-system/volume-snapshot-controller-0
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Created: Created container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Started: Started container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} Killing: Stopping container volume-snapshot-controller
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-fr2s} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(846294c9-7431-4763-8373-c9c072cf9808)
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller-0: {node-controller } NodeNotReady: Node is not ready
Jan 29 05:23:35.631: INFO: event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:68 @ 01/29/23 05:23:35.631 (86ms)
> Enter [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:23:35.631
Jan 29 05:23:35.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Jan 29 05:23:35.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:35.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:35.680: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:37.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:37.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:37.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:47.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:47.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:47.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:55.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:55.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:55.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:57.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:57.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:57.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:59.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:59.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:23:59.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:03.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:03.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:03.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:07.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:07.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:07.746: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:09.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:09.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:09.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:15.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:15.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:15.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:19.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:19.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:19.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:21.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:21.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:21.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:23.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:23.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:23.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:27.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:27.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:27.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:29.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:29.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:29.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
Jan 29 05:24:31.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true.
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:31.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:31.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:33.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:33.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:33.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:37.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:37.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:24:37.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:41.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:41.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:41.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:43.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:43.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:43.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:45.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:45.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:45.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:47.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:47.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:47.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:24:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:53.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:53.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:53.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:55.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:55.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:55.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:57.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:57.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:57.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:24:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:01.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:01.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:01.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:03.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:03.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:03.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:25:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:09.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:09.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:09.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:11.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:13.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:15.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:15.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:15.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:17.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:19.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:19.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:25:19.733: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:21.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:21.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:21.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:23.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:23.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:23.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:25.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:27.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:27.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:27.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:29.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:31.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:31.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:31.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:33.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:33.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:25:33.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:35.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:37.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:37.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:37.729: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:39.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:41.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:43.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:45.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:47.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:47.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:25:47.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:49.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:51.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:53.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:55.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:55.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:55.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:57.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:57.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:57.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:25:59.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:26:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:26:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
Jan 29 05:26:01.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:26:03.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:26:03.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:26:03.728: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:26:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:26:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:26:05.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:26:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:26:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:26:07.727: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. 
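The poll entries above come from the harness repeatedly evaluating each node's Ready condition and logging a mismatch. A minimal sketch of that check is below; it uses simplified stand-in types rather than the real k8s.io/api/core/v1 ones, and the function name and message format only approximate what the e2e framework actually emits.

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 Node types (illustrative only).
type NodeCondition struct {
	Type    string // e.g. "Ready"
	Status  string // "true", "false", or "Unknown"
	Reason  string // e.g. "NodeStatusUnknown"
	Message string
}

type Node struct {
	Name       string
	Conditions []NodeCondition
}

// isConditionSetAsExpected mirrors the shape of the framework's readiness
// check: find the named condition and compare its status, returning a
// log-style message on mismatch.
func isConditionSetAsExpected(node Node, condType, wantStatus string) (bool, string) {
	for _, c := range node.Conditions {
		if c.Type != condType {
			continue
		}
		if c.Status == wantStatus {
			return true, ""
		}
		return false, fmt.Sprintf(
			"Condition %s of node %s is %s instead of %s. Reason: %s, message: %s",
			condType, node.Name, c.Status, wantStatus, c.Reason, c.Message)
	}
	return false, fmt.Sprintf("Condition %s not found on node %s", condType, node.Name)
}

func main() {
	// A node in the state seen throughout the log: kubelet silent, so the
	// node controller has marked Ready false with NodeStatusUnknown.
	node := Node{
		Name: "bootstrap-e2e-minion-group-fr2s",
		Conditions: []NodeCondition{{
			Type:    "Ready",
			Status:  "false",
			Reason:  "NodeStatusUnknown",
			Message: "Kubelet stopped posting node status.",
		}},
	}
	ok, msg := isConditionSetAsExpected(node, "Ready", "true")
	fmt.Println(ok)
	fmt.Println(msg)
}
```

The real framework loops this check every couple of seconds until the node reports Ready or the reboot timeout expires, which is why the same message repeats for each node on every poll.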
Jan 29 05:26:09.735: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 05:26:08 +0000 UTC}]. Failure
Jan 29 05:26:11.726: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable NoExecute 2023-01-29 05:26:08 +0000 UTC}]. Failure
< Exit [AfterEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/node/init/init.go:33 @ 01/29/23 05:26:13.728 (2m38.096s)
> Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:26:13.728
STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:26:13.728
STEP: Collecting events from namespace "reboot-786". - test/e2e/framework/debug/dump.go:42 @ 01/29/23 05:26:13.728
STEP: Found 0 events. - test/e2e/framework/debug/dump.go:46 @ 01/29/23 05:26:13.771
Jan 29 05:26:13.813: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 29 05:26:13.813: INFO:
Jan 29 05:26:13.881: INFO: Logging node info for node bootstrap-e2e-master
Jan 29 05:26:13.924: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master 573932df-4ac9-4a16-9c02-0cca288f19f4 3104 0 2023-01-29 04:56:05 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:05 +0000 
UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2023-01-29 04:56:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 04:56:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2023-01-29 05:22:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: 
{{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858370560 0} {<nil>} 3767940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596226560 0} {<nil>} 3511940Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:22:41 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:22:41 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:22:41 +0000 UTC,LastTransitionTime:2023-01-29 04:56:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:22:41 +0000 UTC,LastTransitionTime:2023-01-29 04:56:25 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.145.111.53,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:6902beac0ad0c174454f307f49ae755d,SystemUUID:6902beac-0ad0-c174-454f-307f49ae755d,BootID:8368e14e-fc42-4513-ba6d-e7ce07a08226,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:135952851,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:125275449,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83 registry.k8s.io/etcd:3.5.7-0],SizeBytes:101639218,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:57552184,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 
registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:b1389e7014425a1752aac55f5043ef4c52edaef0e223bf4d48ed1324e298087c registry.k8s.io/kas-network-proxy/proxy-server:v0.1.1],SizeBytes:21875112,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:26:13.925: INFO: Logging kubelet events for node bootstrap-e2e-master Jan 29 05:26:13.981: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Jan 29 05:26:14.112: INFO: etcd-server-events-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:14.112: INFO: Container etcd-container ready: true, restart count 3 Jan 29 05:26:14.112: INFO: etcd-server-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:14.112: INFO: Container etcd-container ready: true, restart count 4 Jan 29 05:26:14.112: INFO: l7-lb-controller-bootstrap-e2e-master started at 2023-01-29 04:55:37 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:14.112: INFO: Container l7-lb-controller ready: true, restart count 8 Jan 29 05:26:14.112: INFO: konnectivity-server-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:14.112: INFO: Container konnectivity-server-container ready: true, restart count 5 Jan 29 05:26:14.112: INFO: kube-apiserver-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:14.112: INFO: Container kube-apiserver ready: true, restart count 
2 Jan 29 05:26:14.112: INFO: kube-addon-manager-bootstrap-e2e-master started at 2023-01-29 04:55:37 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:14.112: INFO: Container kube-addon-manager ready: true, restart count 4 Jan 29 05:26:14.112: INFO: metadata-proxy-v0.1-kn874 started at 2023-01-29 04:56:34 +0000 UTC (0+2 container statuses recorded) Jan 29 05:26:14.112: INFO: Container metadata-proxy ready: true, restart count 0 Jan 29 05:26:14.112: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Jan 29 05:26:14.112: INFO: kube-controller-manager-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:14.112: INFO: Container kube-controller-manager ready: true, restart count 8 Jan 29 05:26:14.112: INFO: kube-scheduler-bootstrap-e2e-master started at 2023-01-29 04:55:17 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:14.112: INFO: Container kube-scheduler ready: true, restart count 7 Jan 29 05:26:14.328: INFO: Latency metrics for node bootstrap-e2e-master Jan 29 05:26:14.328: INFO: Logging node info for node bootstrap-e2e-minion-group-8xzv Jan 29 05:26:14.385: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-8xzv 30426c99-1665-4753-a8aa-3e12ad653388 3221 0 2023-01-29 04:56:09 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-8xzv kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 05:14:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector Update v1 2023-01-29 05:25:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{}
,"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2023-01-29 05:26:08 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status} {kube-controller-manager Update v1 2023-01-29 05:26:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} }]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-minion-group-8xzv,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 
UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:True,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:25:25 +0000 UTC,Reason:FrequentKubeletRestart,Message:Found 7 matching logs, which meets the threshold of 5,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 
05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.168.157.136,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-8xzv.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-8xzv.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:863ee453ce39c71dfd70eb604edc1f2d,SystemUUID:863ee453-ce39-c71d-fd70-eb604edc1f2d,BootID:fb37bfad-63dd-4512-ad64-a121dafde00b,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 
registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:26:14.386: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-8xzv Jan 29 05:26:14.445: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-8xzv Jan 29 05:26:14.553: INFO: konnectivity-agent-9b2fb started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:14.553: INFO: Container konnectivity-agent ready: true, restart count 7 Jan 29 05:26:14.553: INFO: metrics-server-v0.5.2-867b8754b9-jrjtd started at 2023-01-29 04:56:43 +0000 UTC (0+2 container statuses recorded) Jan 29 05:26:14.553: INFO: Container metrics-server ready: false, restart count 8 Jan 29 05:26:14.553: INFO: Container metrics-server-nanny ready: false, restart count 8 Jan 29 05:26:14.553: INFO: kube-proxy-bootstrap-e2e-minion-group-8xzv started at 2023-01-29 04:56:09 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:14.553: INFO: Container kube-proxy ready: true, restart count 6 Jan 29 05:26:14.553: INFO: metadata-proxy-v0.1-5sc67 started at 2023-01-29 04:56:10 +0000 UTC (0+2 container statuses recorded) Jan 29 05:26:14.553: 
INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 05:26:14.553: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 05:26:19.048: INFO: Latency metrics for node bootstrap-e2e-minion-group-8xzv Jan 29 05:26:19.048: INFO: Logging node info for node bootstrap-e2e-minion-group-fr2s Jan 29 05:26:19.091: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-fr2s 0621ab1c-3d02-4018-837d-bc99627df4e9 3268 0 2023-01-29 04:56:06 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-fr2s kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2023-01-29 04:56:06 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2023-01-29 05:14:02 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {node-problem-detector 
Update v1 2023-01-29 05:25:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2023-01-29 05:26:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}}}} } {kubelet Update v1 2023-01-29 05:26:08 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{},"f:nodeInfo":{"f:bootID":{}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-kubeadm-ci-1-6/us-west1-b/bootstrap-e2e-minion-group-fr2s,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:FrequentKubeletRestart,Status:True,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:25:25 +0000 UTC,Reason:FrequentKubeletRestart,Message:Found 7 matching logs, 
which meets the threshold of 5,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2023-01-29 05:25:26 +0000 UTC,LastTransitionTime:2023-01-29 05:15:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2023-01-29 04:56:21 +0000 UTC,LastTransitionTime:2023-01-29 04:56:21 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet 
has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2023-01-29 05:26:08 +0000 UTC,LastTransitionTime:2023-01-29 05:26:08 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:104.196.249.18,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-fr2s.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-fr2s.c.k8s-jkns-e2e-kubeadm-ci-1-6.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7cc854fd12580be1e80a1147a3c758d9,SystemUUID:7cc854fd-1258-0be1-e80a-1147a3c758d9,BootID:3d14e660-eaa8-4058-979f-f9970caf9460,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.3-2-gaccb53cab,KubeletVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,KubeProxyVersion:v1.27.0-alpha.1.73+8e642d3d0deab2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.1.73_8e642d3d0deab2],SizeBytes:66989256,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:017727efcfeb7d053af68e51436ce8e65edbc6ca573720afb4f79c8594036955 
registry.k8s.io/coredns/coredns:v1.10.0],SizeBytes:15273057,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:939c42e815e6b6af3181f074652c0d18fe429fcee9b49c1392aee7e92887cfef registry.k8s.io/kas-network-proxy/proxy-agent:v0.1.1],SizeBytes:8364694,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:26:19.091: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-fr2s Jan 29 05:26:19.140: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-fr2s Jan 29 05:26:19.304: INFO: coredns-6846b5b5f-slgkj started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container coredns ready: true, restart count 6 Jan 29 05:26:19.304: INFO: kube-dns-autoscaler-5f6455f985-4cpk6 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container autoscaler ready: true, restart count 5 Jan 
29 05:26:19.304: INFO: metadata-proxy-v0.1-xmtst started at 2023-01-29 04:56:07 +0000 UTC (0+2 container statuses recorded) Jan 29 05:26:19.304: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 05:26:19.304: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 05:26:19.304: INFO: konnectivity-agent-6hl7x started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container konnectivity-agent ready: true, restart count 9 Jan 29 05:26:19.304: INFO: kube-proxy-bootstrap-e2e-minion-group-fr2s started at 2023-01-29 04:56:06 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container kube-proxy ready: true, restart count 6 Jan 29 05:26:19.304: INFO: l7-default-backend-8549d69d99-nw9t6 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container default-http-backend ready: false, restart count 4 Jan 29 05:26:19.304: INFO: volume-snapshot-controller-0 started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:26:19.304: INFO: Container volume-snapshot-controller ready: true, restart count 11 Jan 29 05:26:47.648: INFO: Latency metrics for node bootstrap-e2e-minion-group-fr2s Jan 29 05:26:47.648: INFO: Logging node info for node bootstrap-e2e-minion-group-q3jk Jan 29 05:27:47.692: INFO: Error getting node info Get "https://34.145.111.53/api/v1/nodes/bootstrap-e2e-minion-group-q3jk": stream error: stream ID 1287; INTERNAL_ERROR; received from peer Jan 29 05:27:47.692: INFO: Node Info: &Node{ObjectMeta:{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] 
[]},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{},Allocatable:ResourceList{},Phase:,Conditions:[]NodeCondition{},Addresses:[]NodeAddress{},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:0,},},NodeInfo:NodeSystemInfo{MachineID:,SystemUUID:,BootID:,KernelVersion:,OSImage:,ContainerRuntimeVersion:,KubeletVersion:,KubeProxyVersion:,OperatingSystem:,Architecture:,},Images:[]ContainerImage{},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Jan 29 05:27:47.692: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-q3jk Jan 29 05:27:47.740: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-q3jk Jan 29 05:28:20.596: INFO: coredns-6846b5b5f-cgf5q started at 2023-01-29 04:56:28 +0000 UTC (0+1 container statuses recorded) Jan 29 05:28:20.596: INFO: Container coredns ready: true, restart count 8 Jan 29 05:28:20.596: INFO: kube-proxy-bootstrap-e2e-minion-group-q3jk started at 2023-01-29 04:56:08 +0000 UTC (0+1 container statuses recorded) Jan 29 05:28:20.596: INFO: Container kube-proxy ready: true, restart count 6 Jan 29 05:28:20.596: INFO: metadata-proxy-v0.1-bjzbd started at 2023-01-29 04:56:09 +0000 UTC (0+2 container statuses recorded) Jan 29 05:28:20.596: INFO: Container metadata-proxy ready: true, restart count 2 Jan 29 05:28:20.596: INFO: Container prometheus-to-sd-exporter ready: true, restart count 2 Jan 29 05:28:20.596: INFO: konnectivity-agent-fn54g started at 2023-01-29 04:56:21 +0000 UTC (0+1 container statuses recorded) Jan 29 05:28:20.596: INFO: Container konnectivity-agent ready: true, restart count 8 Jan 29 05:28:20.837: INFO: Latency metrics for node bootstrap-e2e-minion-group-q3jk END STEP: dump namespace information after failure - test/e2e/framework/framework.go:284 @ 01/29/23 05:28:20.837 (2m7.109s) < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot 
[Disruptive] [Feature:Reboot] - dump namespaces | framework.go:206 @ 01/29/23 05:28:20.837 (2m7.109s) > Enter [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:28:20.837 STEP: Destroying namespace "reboot-786" for this suite. - test/e2e/framework/framework.go:347 @ 01/29/23 05:28:20.837 < Exit [DeferCleanup (Each)] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - tear down framework | framework.go:203 @ 01/29/23 05:28:30.79 (9.953s) > Enter [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:28:30.79 < Exit [ReportAfterEach] TOP-LEVEL - test/e2e/e2e_test.go:144 @ 01/29/23 05:28:30.79 (0s)
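The failure above plays out as the framework repeatedly checking each node's Ready condition: it expects Ready to flip to false after the disruption and back to true within the timeout, and logs "Condition Ready of node ... is X instead of Y" on every poll. A simplified, self-contained sketch of that condition check (local stand-in types for illustration, not the real corev1 API or the framework's actual helper):

```go
package main

import "fmt"

// NodeCondition is a simplified stand-in for the corev1 type the
// e2e framework inspects on each poll.
type NodeCondition struct {
	Type   string // e.g. "Ready"
	Status string // "True", "False", or "Unknown"
	Reason string // e.g. "KubeletReady", "NodeStatusUnknown"
}

// Node is a minimal stand-in for corev1.Node.
type Node struct {
	Name       string
	Conditions []NodeCondition
}

// conditionIs reports whether the named condition currently has the
// wanted status. The test polls this until the node transitions, or
// fails with "at least one node failed to reboot in the time given"
// when the timeout expires first.
func conditionIs(n Node, condType, wanted string) bool {
	for _, c := range n.Conditions {
		if c.Type == condType {
			return c.Status == wanted
		}
	}
	return false
}

func main() {
	// A node whose kubelet has stopped posting status, as in the log.
	n := Node{Name: "bootstrap-e2e-minion-group-8xzv", Conditions: []NodeCondition{
		{Type: "Ready", Status: "Unknown", Reason: "NodeStatusUnknown"},
	}}
	fmt.Println(conditionIs(n, "Ready", "True"))  // not yet back to Ready
	fmt.Println(conditionIs(n, "Ready", "False")) // Unknown is not False either
}
```

Note that a node stuck in Ready=Unknown satisfies neither "Ready to be true" nor "Ready to be false", which is why the poll loop can burn the whole timeout without progress.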
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[It\]\s\[sig\-cloud\-provider\-gcp\]\sReboot\s\[Disruptive\]\s\[Feature\:Reboot\]\seach\snode\sby\striggering\skernel\spanic\sand\sensure\sthey\sfunction\supon\srestart$'
[FAILED] Test failed; at least one node failed to reboot in the time given. In [It] at: test/e2e/cloud/gcp/reboot.go:190 @ 01/29/23 05:19:05.179
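The trace that follows shows how this variant induces the reboot: the test SSHes to each node and crashes the kernel through the magic SysRq interface. A minimal sketch of that trigger, copied from the SSH commands visible verbatim in the log (the sketch only builds and prints the command; never run the command itself outside a disposable VM):

```shell
# The crash command the e2e test runs over SSH on each node:
#   "echo 1 > /proc/sys/kernel/sysrq"  enables all SysRq functions,
#   "echo c > /proc/sysrq-trigger"     then forces an immediate kernel panic.
# nohup + backgrounding lets the SSH session return before the node dies.
CRASH_CMD="nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &"
echo "$CRASH_CMD"
```

The 10-second sleep gives the SSH command time to exit cleanly (exit code 0 in the log) before the panic takes the node down.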
> Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:12:15.22 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/cloud/gcp/reboot.go:60 @ 01/29/23 05:12:15.22 (0s) > Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:12:15.22 STEP: Creating a kubernetes client - test/e2e/framework/framework.go:208 @ 01/29/23 05:12:15.221 Jan 29 05:12:15.221: INFO: >>> kubeConfig: /workspace/.kube/config STEP: Building a namespace api object, basename reboot - test/e2e/framework/framework.go:247 @ 01/29/23 05:12:15.222 Jan 29 05:12:15.261: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:12:17.301: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:12:19.303: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:12:21.303: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused Jan 29 05:12:23.301: INFO: Unexpected error while creating namespace: Post "https://34.145.111.53/api/v1/namespaces": dial tcp 34.145.111.53:443: connect: connection refused STEP: Waiting for a default service account to be provisioned in namespace - test/e2e/framework/framework.go:256 @ 01/29/23 05:13:12.995 STEP: Waiting for kube-root-ca.crt to be provisioned in namespace - test/e2e/framework/framework.go:259 @ 01/29/23 05:13:13.102 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - set up framework | framework.go:188 @ 01/29/23 05:13:13.184 (57.964s) > 
Enter [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 05:13:13.184 < Exit [BeforeEach] [sig-cloud-provider-gcp] Reboot [Disruptive] [Feature:Reboot] - test/e2e/framework/metrics/init/init.go:33 @ 01/29/23 05:13:13.184 (0s) > Enter [It] each node by triggering kernel panic and ensure they function upon restart - test/e2e/cloud/gcp/reboot.go:109 @ 01/29/23 05:13:13.184 Jan 29 05:13:13.368: INFO: Getting bootstrap-e2e-minion-group-8xzv Jan 29 05:13:13.368: INFO: Getting bootstrap-e2e-minion-group-fr2s Jan 29 05:13:13.368: INFO: Getting bootstrap-e2e-minion-group-q3jk Jan 29 05:13:13.416: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-8xzv condition Ready to be true Jan 29 05:13:13.433: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-q3jk condition Ready to be true Jan 29 05:13:13.433: INFO: Waiting up to 20s for node bootstrap-e2e-minion-group-fr2s condition Ready to be true Jan 29 05:13:13.459: INFO: Node bootstrap-e2e-minion-group-8xzv has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67] Jan 29 05:13:13.459: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67] Jan 29 05:13:13.459: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-5sc67" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:13:13.459: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:13:13.476: INFO: Node bootstrap-e2e-minion-group-fr2s has 4 assigned pods with no liveness probes: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0] Jan 29 05:13:13.476: INFO: Waiting up to 5m0s for 4 pods to be running and ready, or succeeded: 
[kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0] Jan 29 05:13:13.476: INFO: Waiting up to 5m0s for pod "volume-snapshot-controller-0" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:13:13.476: INFO: Node bootstrap-e2e-minion-group-q3jk has 2 assigned pods with no liveness probes: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd] Jan 29 05:13:13.476: INFO: Waiting up to 5m0s for 2 pods to be running and ready, or succeeded: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd] Jan 29 05:13:13.476: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-bjzbd" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:13:13.476: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:13:13.476: INFO: Waiting up to 5m0s for pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:13:13.476: INFO: Waiting up to 5m0s for pod "kube-dns-autoscaler-5f6455f985-4cpk6" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:13:13.477: INFO: Waiting up to 5m0s for pod "metadata-proxy-v0.1-xmtst" in namespace "kube-system" to be "running and ready, or succeeded" Jan 29 05:13:13.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv": Phase="Running", Reason="", readiness=true. Elapsed: 44.184014ms Jan 29 05:13:13.503: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-8xzv" satisfied condition "running and ready, or succeeded" Jan 29 05:13:13.503: INFO: Pod "metadata-proxy-v0.1-5sc67": Phase="Running", Reason="", readiness=true. 
Elapsed: 44.288046ms Jan 29 05:13:13.503: INFO: Pod "metadata-proxy-v0.1-5sc67" satisfied condition "running and ready, or succeeded" Jan 29 05:13:13.503: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-8xzv metadata-proxy-v0.1-5sc67] Jan 29 05:13:13.503: INFO: Getting external IP address for bootstrap-e2e-minion-group-8xzv Jan 29 05:13:13.503: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-8xzv(34.168.157.136:22) Jan 29 05:13:13.524: INFO: Pod "volume-snapshot-controller-0": Phase="Running", Reason="", readiness=true. Elapsed: 47.847155ms Jan 29 05:13:13.524: INFO: Pod "volume-snapshot-controller-0" satisfied condition "running and ready, or succeeded" Jan 29 05:13:13.524: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6": Phase="Running", Reason="", readiness=true. Elapsed: 47.609954ms Jan 29 05:13:13.524: INFO: Pod "kube-dns-autoscaler-5f6455f985-4cpk6" satisfied condition "running and ready, or succeeded" Jan 29 05:13:13.524: INFO: Pod "metadata-proxy-v0.1-xmtst": Phase="Running", Reason="", readiness=true. Elapsed: 47.631034ms Jan 29 05:13:13.524: INFO: Pod "metadata-proxy-v0.1-xmtst" satisfied condition "running and ready, or succeeded" Jan 29 05:13:13.527: INFO: Pod "metadata-proxy-v0.1-bjzbd": Phase="Running", Reason="", readiness=true. Elapsed: 50.818372ms Jan 29 05:13:13.527: INFO: Pod "metadata-proxy-v0.1-bjzbd" satisfied condition "running and ready, or succeeded" Jan 29 05:13:13.527: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s": Phase="Running", Reason="", readiness=true. Elapsed: 50.791041ms Jan 29 05:13:13.527: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-fr2s" satisfied condition "running and ready, or succeeded" Jan 29 05:13:13.527: INFO: Wanted all 4 pods to be running and ready, or succeeded. Result: true. 
Pods: [kube-dns-autoscaler-5f6455f985-4cpk6 kube-proxy-bootstrap-e2e-minion-group-fr2s metadata-proxy-v0.1-xmtst volume-snapshot-controller-0] Jan 29 05:13:13.527: INFO: Getting external IP address for bootstrap-e2e-minion-group-fr2s Jan 29 05:13:13.527: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-fr2s(104.196.249.18:22) Jan 29 05:13:13.528: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk": Phase="Running", Reason="", readiness=true. Elapsed: 51.265072ms Jan 29 05:13:13.528: INFO: Pod "kube-proxy-bootstrap-e2e-minion-group-q3jk" satisfied condition "running and ready, or succeeded" Jan 29 05:13:13.528: INFO: Wanted all 2 pods to be running and ready, or succeeded. Result: true. Pods: [kube-proxy-bootstrap-e2e-minion-group-q3jk metadata-proxy-v0.1-bjzbd] Jan 29 05:13:13.528: INFO: Getting external IP address for bootstrap-e2e-minion-group-q3jk Jan 29 05:13:13.528: INFO: SSH "nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 &" on bootstrap-e2e-minion-group-q3jk(34.82.121.186:22) Jan 29 05:13:14.038: INFO: ssh prow@34.168.157.136:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 05:13:14.038: INFO: ssh prow@34.168.157.136:22: stdout: "" Jan 29 05:13:14.038: INFO: ssh prow@34.168.157.136:22: stderr: "" Jan 29 05:13:14.038: INFO: ssh prow@34.168.157.136:22: exit code: 0 Jan 29 05:13:14.038: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-8xzv condition Ready to be false Jan 29 05:13:14.062: INFO: ssh prow@104.196.249.18:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 05:13:14.062: INFO: ssh prow@104.196.249.18:22: stdout: "" Jan 29 05:13:14.062: INFO: ssh 
prow@104.196.249.18:22: stderr: "" Jan 29 05:13:14.062: INFO: ssh prow@104.196.249.18:22: exit code: 0 Jan 29 05:13:14.062: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-fr2s condition Ready to be false Jan 29 05:13:14.063: INFO: ssh prow@34.82.121.186:22: command: nohup sh -c 'echo 1 | sudo tee /proc/sys/kernel/sysrq && sleep 10 && echo c | sudo tee /proc/sysrq-trigger' >/dev/null 2>&1 & Jan 29 05:13:14.063: INFO: ssh prow@34.82.121.186:22: stdout: "" Jan 29 05:13:14.063: INFO: ssh prow@34.82.121.186:22: stderr: "" Jan 29 05:13:14.063: INFO: ssh prow@34.82.121.186:22: exit code: 0 Jan 29 05:13:14.063: INFO: Waiting up to 2m0s for node bootstrap-e2e-minion-group-q3jk condition Ready to be false Jan 29 05:13:14.088: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:14.105: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:14.108: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:16.131: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:16.149: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:16.152: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:18.174: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. 
Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:18.193: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:18.195: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:20.217: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:20.237: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:20.239: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:22.262: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:22.281: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:22.283: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:24.308: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:24.326: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 05:13:24.329: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:26.353: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:26.373: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:26.376: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:28.398: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:28.416: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:28.419: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:30.441: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:30.459: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:30.465: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 05:13:32.484: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:32.501: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:32.509: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:34.527: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:34.545: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:34.552: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:36.571: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:36.589: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:36.596: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:38.615: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 05:13:38.632: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:38.639: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:40.657: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:40.676: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:40.682: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:42.701: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:42.720: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:42.725: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:44.744: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:44.764: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 05:13:44.770: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:46.788: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:46.808: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:46.813: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:48.833: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:48.852: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:48.856: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:50.876: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:50.896: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:50.899: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 05:13:52.920: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:52.939: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:52.942: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:54.964: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:54.981: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:54.985: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:57.007: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:57.025: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:57.028: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:59.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. 
AppArmor enabled Jan 29 05:13:59.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:13:59.145: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:14:01.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:14:01.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:14:01.210: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is true instead of false. Reason: KubeletReady, message: kubelet is posting ready status. AppArmor enabled Jan 29 05:14:03.280: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-q3jk condition Ready to be true Jan 29 05:14:03.280: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-8xzv condition Ready to be true Jan 29 05:14:03.280: INFO: Waiting up to 5m0s for node bootstrap-e2e-minion-group-fr2s condition Ready to be true Jan 29 05:14:03.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:14:03.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:14:03.336: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:14:05.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-q3jk is false instead of true. 
Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:14:05.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-8xzv is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status. Jan 29 05:14:05.384: INFO: Condition Ready of node bootstrap-e2e-minion-group-fr2s is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status.
[... the identical message "Condition Ready of node <name> is false instead of true. Reason: NodeStatusUnknown, message: Kubelet stopped posting node status." repeats for all three nodes (bootstrap-e2e-minion-group-8xzv, bootstrap-e2e-minion-group-fr2s, bootstrap-e2e-minion-group-q3jk) on every ~2s poll from 05:14:07.431 through 05:14:40.186, then, after a ~44s gap in polling, again from 05:15:24.413 through 05:15:53.058, where the captured log cuts off mid-line ("Condition Ready of node bootst") ...]
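The polling visible above repeatedly fetches each node's status and checks its Ready condition until it returns to true or the test's timeout expires (the failure at reboot.go:190). The core readiness check can be sketched as follows; this is a simplified, hypothetical helper with a stand-in condition struct, not the actual e2e framework code in k8s.io/kubernetes/test/e2e/framework:

```go
package main

import "fmt"

// NodeCondition is a simplified stand-in for a Kubernetes node status
// condition (the real type is corev1.NodeCondition).
type NodeCondition struct {
	Type   string // e.g. "Ready"
	Status string // "True", "False", or "Unknown"
	Reason string // e.g. "NodeStatusUnknown"
}

// isNodeReady reports whether the Ready condition is present and True.
// When the kubelet stops posting status, the node controller flips the
// Ready condition away from True with reason NodeStatusUnknown, so a
// poll loop calling this keeps returning false, as seen in the log.
func isNodeReady(conds []NodeCondition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	// No Ready condition reported at all: treat as not ready.
	return false
}

func main() {
	// State matching the log: kubelet stopped posting node status.
	conds := []NodeCondition{{
		Type:   "Ready",
		Status: "Unknown",
		Reason: "NodeStatusUnknown",
	}}
	fmt.Println(isNodeReady(conds)) // not ready, so the poll loop continues
}
```

In the real test, a loop like this runs against the API server until the node reports Ready true again; here it never did within the window, so the suite reported "at least one node failed to reboot in the time given."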